How AI Startups Can Win With Better Strategy - Ep. 50 with Mike Maples

Released Wednesday, 5th March 2025

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

positioning is one of the most powerful ways

0:02

a startup can have an insight. Most people

0:04

think an insight is just about product. The

0:06

what is the product, the how is how

0:08

you deliver the product. And the how can

0:10

have an insight as well. And when you're

0:12

in a lawsuit, let's say you're Apple and

0:14

you're in a lawsuit with Samsung, there's a

0:16

ton of documents that have to get discovered

0:18

for the court case. They hire these outsourcer

0:20

firms of people to go pore through these

0:22

documents and they charge them on a cost

0:24

plus basis. So what Text IQ

0:26

said is, well, we've got AI. Why

0:28

don't you just send us all your

0:30

documents and we'll send you back the

0:32

ones that are discoverable and we'll have

0:35

more accuracy. Now you're not competing for

0:37

software license or per seat revenue or

0:39

even a subscription price. You're saying I'm

0:41

a substitute for that labor spend. You

0:43

used to spend 50 million dollars a

0:45

year on this so I can do

0:47

it for a tenth of the price

0:49

and much better. If I'm a SaaS

0:51

vendor and I charge subscription by the

0:53

seat and that's all I've ever done.

0:55

Think about how embedded that must be

0:57

in the culture, right? Every product manager

0:59

thinks that way. The CFO thinks that

1:01

way. There's nobody in the company who

1:03

knows how to react to your strategy.

1:05

If you change your business model, everyone's

1:07

going to lose their mind. OpenAI is

1:09

moving from being this API developer tool

1:12

to like a product company. They're releasing

1:14

all of these consumer-facing products. A

1:16

lot of founders are thinking about what

1:18

if OpenAI includes this as part of

1:20

ChatGPT, or includes this in some new

1:22

product that they release. And I'm curious

1:24

how you would think about... positioning that.

1:26

I'm involved with a company called Applied

1:28

Intuition and they create simulation software for

1:30

autonomous vehicles. If you're GM or if

1:32

you're Porsche or you're these big companies,

1:34

that's pretty valuable. But you can't just

1:36

get that when Sam Altman releases his

1:38

next demo at a demo day event.

1:40

To succeed as a company like that

1:42

and to really ask for giant contracts

1:44

from these companies, you have to have

1:46

not only AI expertise and products, but

1:48

you have to have multi-discipline expertise.

1:51

Everybody kind of disses on

1:53

these companies that are just an AI

1:55

wrapper, right? And I'm like,

1:57

well, if the thing that you're wrapping on

1:59

top of involves a process that you really

2:01

know about that most people don't, that may

2:03

be a path to a great company. Mike,

2:19

welcome to the show. Thanks for having me. I've

2:21

been looking forward to this. Yeah. For

2:24

people who don't know you, you are a

2:26

legendary investor at Floodgate, which is one of

2:28

the first seed firms. You're

2:30

an early investor in Twitch, Lyft, Okta, and a bunch

2:32

more. You're also the author of

2:34

the book, Pattern Breakers, which is an excellent book that

2:37

I've read. We've also reviewed it on Every, which

2:40

I guess I would summarize as: it's sort

2:42

of like a guidebook about how there's no

2:44

guidebook to building companies. So

2:46

it's very, it's a little bit Taoist. A little

2:49

bit Zen. Yeah. Yeah, a little

2:51

Zen, which I love. I think that's

2:53

so, I think that's so good and

2:55

so important. You have a lot of

2:57

emphasis on like founders winning by being

2:59

extraordinarily different and breaking the established patterns

3:01

of like how you're supposed to run

3:03

a company. I loved it and I'm

3:05

excited to chat with you about that

3:07

and everything going on in AI on

3:09

the show. Yeah. Well,

3:11

cool. Let's get after it. Let's

3:13

do it. One of

3:15

the things that I'm

3:17

personally curious about is

3:19

you started investing when

3:21

seed wasn't really a

3:24

thing and helped to invent

3:26

this new way of capitalizing

3:28

companies for an

3:30

earlier era of startups, pre-AI startups, let's

3:32

just say that, right? And I think that

3:35

that is an example of the kind of

3:37

thing that you talk about in your book,

3:39

Pattern Breakers, which is like taking a look

3:41

at the landscape of maybe what companies need

3:43

and how companies are funded and being like,

3:45

well, there's this thing that seems to make

3:47

a lot of sense to me that there

3:50

should be a seed stage funding mechanism

3:52

and just going and doing it. And

3:55

I'm kind of curious... My

3:59

feeling right now is that AI

4:01

is radically changing the economics of starting

4:03

a business. Software

4:06

is orders of magnitude cheaper to make today than

4:08

it was 10 years ago. I'm

4:11

curious, using that same sense

4:14

of, okay, I'm looking at the environment

4:16

and looking at how things change, and

4:18

I'm maybe pushing away the established structures

4:20

for a second. How do you think

4:22

that that might change investing in how

4:24

companies raise money and all that kind

4:26

of stuff? Yeah, I've been

4:28

wondering about this a lot lately. So

4:30

as you know, one of the things

4:33

that I emphasize in startups is the

4:35

power of harnessing inflections, right? So I

4:37

like to say that, you know, business

4:39

is never a fair fight. And

4:41

the startup has to have some unfair

4:43

advantage, some way to win. And the way

4:46

they do that is they harness inflections.

4:48

Inflections allow the startup to wage asymmetric

4:50

warfare on the present and show up

4:52

with something radically different. Without

4:54

inflections, they have to play in the incumbent

4:57

sandbox. And so they're limited in their upside.

5:00

So every now and then, though, you get something

5:02

that I like to call a sea change. And

5:04

when I was a kid, the sea change

5:07

was mass computation: the personal computer.

5:10

And computers used to be really expensive. And

5:13

then they became asymptotically free and ubiquitous.

5:15

And you had one on every desk

5:17

in every home. And a whole

5:19

new set of companies emerged. Software

5:21

became a real business for the

5:23

first time. Software used to be

5:25

what you gave away because mainframes

5:27

were expensive. You had to keep

5:29

them running all the time. And

5:31

so the assumptions got inverted

5:34

and you had a bunch of

5:36

companies using the software licensing model,

5:38

you know, Oracle, Microsoft, SAP, companies

5:40

like that Then you had in

5:42

the 90s the era of mass

5:44

connectivity, which I think was extended

5:46

with the iPhone and in mass

5:48

connectivity, rather than processing power becoming

5:50

free, communications bandwidth starts to become

5:52

free, and you start to not

5:54

just have computers everywhere, but

5:56

you have everybody in the world and every device

5:58

in the world connected in these networks. And

6:01

new business models came out of that,

6:03

subscription and SaaS and advertising. It's

6:07

interesting, there aren't any software licensing

6:09

model companies started after 1990 that

6:11

really mattered. All those

6:13

companies got subsumed in Microsoft because they could

6:16

put it in the OS or outcompete them.

6:19

So like why do I think the AI sea

6:21

change matters? What I

6:24

see happen with these sea

6:26

changes is that some business

6:28

models become relatively more attractive

6:30

and some business models become

6:32

relatively less attractive. And

6:34

there's only nine business models that I

6:36

know of in human history. And

6:39

so the most recent business model I

6:41

know of is 250 years old. It's

6:43

the subscription model. And so, you know,

6:45

what I what I like to do

6:47

is I like to say, OK, if

6:49

there's nine business models so far in

6:51

humanity and every time there's a technology

6:53

sea change, there's a migration of attractive

6:55

business models from one set to the

6:57

other. How might that migration

6:59

occur this time? Because what you want

7:01

when you're a startup is to be

7:03

counter positioned to the incumbents. You know,

7:06

this whole 'the incumbents have the advantage'

7:08

discussion is wrong-headed. Of

7:10

course, the incumbent has the advantage if you play

7:12

by the rules of the incumbents. But

7:14

what you want to do is you

7:16

want to say, how does AI make

7:18

some business models relatively more attractive and

7:21

less attractive? And how can

7:23

I as a startup exploit those

7:25

new opportunities? Not just insight in

7:27

my product, but some type of

7:29

an insight in my business model

7:31

or go-to-market strategy that disorients incumbents

7:34

and where they have a disincentive

7:36

to retaliate or to copy your

7:38

strategy. So that's mostly what I'm

7:40

looking at these days from an

7:42

AI point of view. Yeah,

7:45

so I think like one of the things

7:47

that I see a lot from the business

7:49

model perspective and right now we're talking about

7:51

business models for startups. I would also like

7:54

to talk about business models for venture like

7:56

funding startups. Sure. But business

7:58

models for startups just to start there

8:00

for a second. One

8:02

of the things I'm seeing a lot of

8:04

is paying per outcome as opposed to paying

8:06

per month. Yes. Which I think is a

8:09

really interesting one. Is that something you have

8:11

your eye on? Oh, absolutely.

8:14

So, you know, there's a

8:16

business model called tailored services

8:18

with long-term contracts. And

8:21

right now, most people think that's

8:23

unattractive. What are tailored services with

8:25

long-term contracts? That could be

8:28

like the defense subprimes. It could

8:30

be a contract research organization for

8:32

a pharma company. You

8:34

know, it's somebody that

8:36

offers services on a contract basis,

8:39

usually is labor intensive, usually is

8:41

cost plus. And the

8:43

conventional wisdom today is those are not

8:45

attractive opportunities for software companies. Like

8:48

a law firm or something? Like

8:50

a law firm, perfect example. So

8:52

like an example, like a law

8:54

firm or legal services, a

8:57

company I was involved with a few years ago was

8:59

called Text IQ. And they would

9:01

go to a big corporation and they would say, you

9:03

know, when you're in a lawsuit, let's say you're Apple

9:05

and you're in a lawsuit with Samsung, there's

9:08

a ton of documents that have to

9:10

get discovered for the court case. And

9:12

so the way that happens in reality

9:14

is they hire these outsourcer firms of

9:16

people to go pore through these documents

9:18

and they charge them on a cost

9:20

plus basis. Yeah. And so what Text

9:22

IQ said is, well, we've got AI.

9:25

Why don't you just send us all your documents and

9:27

we'll send you back the ones that are discoverable and

9:29

we'll have more accuracy. Well, now

9:31

you're not competing for software license

9:34

desktop revenue or per-seat revenue or

9:36

even a subscription price. You're

9:38

saying, hey, look, I'm a substitute for that

9:41

labor spend. You used to spend

9:43

$50 million a year on this contract

9:45

outsourcer that sorts through these documents. I

9:47

can do it for a tenth of

9:49

the price and much better. And

9:52

now you're competing over that labor

9:54

cost bucket rather than the software

9:56

spend bucket. And how many seats

9:58

do I get? Well, that's interesting

10:00

because there's cost per task done.

10:02

So it's cost per document processed

10:04

or whatever, which is sort of

10:06

like what OpenAI does when you

10:08

send them a prompt, they send

10:11

a response. But

10:13

even if they send the response and the response

10:15

isn't good, you still pay for it. Right? And

10:18

then there's other companies that are sort

10:20

of capturing the value, part of the

10:22

value that they generate. So,

10:24

let's say

10:26

it's an SDR bot: if

10:29

they increase your sales by some amount,

10:31

your close rate, they take a percent

10:33

of that, only when it's successful. Have

10:36

you looked at those two? Yeah.

10:38

And so I do like

10:40

the outcome based pricing models

10:43

a lot. You know,

10:45

they both have their virtues, right? The thing

10:47

about OpenAI is like you could use

10:49

DALL-E to generate some art that you don't

10:51

think looks pretty enough. But

10:53

OpenAI probably deserves to be compensated for

10:56

the fact that you did that, right? Yeah.

10:58

It's sometimes hard to know if the job was

11:01

done well or not. It's like, it's not so

11:03

clear. And sometimes it's the customer's fault that the

11:05

job wasn't done well, right? It's tricky. You

11:08

know, back in my ancient days when I was a

11:10

founder, I used to have

11:12

this expression when I would sell enterprise software.

11:15

I called it, what does it take to ring

11:17

the bell? And so like if

11:19

you go into the carnival, you know how there's that thing

11:21

where you have this big mallet and you hit this thing

11:23

and hopefully it goes all the way up and rings the

11:25

bell. But if it doesn't go all the way up, it

11:27

makes no sound. It has to go all the way up

11:29

and ring the bell. It's binary. And

11:31

so what I used to say to

11:33

the folks that I would work with

11:36

is that the customer doesn't care that

11:38

your software ran according to how the

11:40

specification works. That's not what they're buying.

11:42

They have a job to be done.

11:45

They're hiring your product to do a job. And

11:48

so we need to understand what's it going to take

11:50

to ring the bell for doing that job. And

11:52

if we ring the bell, they're going to say,

11:55

this is amazing. I want more of this. If

11:57

it doesn't ring the bell, they're not going to

11:59

care that the mechanism of our system works. They're

12:01

not going to be interested in that. And

12:04

so for me, the outcome-based models

12:06

that we were just talking about a

12:08

minute ago are kind of asking that,

12:10

what is the job to be done

12:12

in a Clay Christensen sort of lens?

12:15

And then what does it mean to ring the bell? And

12:18

can I get paid if I unambiguously

12:20

succeed at that over and over again?

12:22

And the thing that makes that I think interesting

12:25

over like a SaaS model is that the incumbents

12:27

are all going to be SaaS. And

12:29

if you're guaranteed to get 20 bucks a

12:31

seat or whatever it is, the

12:34

idea of moving to like a pay for

12:36

performance model is like very unappealing. That's

12:38

right. So startups, to your counter positioning point,

12:40

like that's a thing that startups can do

12:42

that incumbents like some incumbents already do this.

12:44

Like in the customer service world, this has

12:46

been a thing for forever. But in general,

12:49

Um, this is not a thing. And so

12:51

incumbents are not going to be able to

12:53

do this very well. Yeah. I think that

12:55

this counter positioning thing is a really important

12:57

thing to maybe double click on. And so

12:59

like, um, a great example

13:01

is in the nineties, if you were

13:03

a startup, the words that you dreaded

13:06

to hear were: Microsoft has decided to

13:08

compete in your market because you're just

13:10

like, okay, I guess I'm out of

13:12

business because even if they start losing,

13:14

they're just going to bundle this thing

13:16

in Windows. And I'm just hosed, right?

13:18

And so that was happening to a

13:21

lot of companies. You know, Netscape just

13:23

disappeared basically because Microsoft decided to bundle

13:25

the browser, you know, in the operating

13:27

system and go full ham right

13:29

against netscape. Well then

13:31

the internet happens and then some

13:33

people start to discover that you

13:35

can monetize not by selling by

13:37

the seat or by the desktop, but by

13:39

selling ads. And that was

13:42

Google, and Microsoft had no answer to

13:44

that. You can't bundle something in your

13:46

operating system and deal with the fact

13:48

that Google is pricing by ads, right?

13:51

It doesn't solve the problem. It doesn't

13:53

impact their business at all. And

13:56

so Google was counter positioned to Microsoft

13:58

from a business model perspective. And

14:00

like counter positioning is one of the

14:03

most powerful ways a startup can have

14:05

an insight. Most people think an insight

14:07

is just about product, but it can

14:09

also be about the how. The what is the

14:11

product, the how is how you deliver

14:13

the product. And the how can have

14:16

an insight as well. And quite often,

14:18

the very best, most valuable companies have

14:20

an insight around business model that's facilitated by the technology.

14:22

Google's business model couldn't work before the

14:24

internet. The technology wouldn't

14:26

have provided the empowerment necessary for Google

14:29

to monetize with ads, but now all

14:31

of a sudden it did. And

14:34

so that's what we look for with

14:36

this counter positioning. And to

14:38

your point, right? Like now, it's sell the

14:40

work, not the software. If I'm

14:42

a company, if I'm a SaaS vendor and

14:44

I charge subscription by the seat, and that's

14:46

all I've ever done, think about

14:48

how embedded that must be in the culture,

14:50

right? Every product manager thinks that way. The

14:53

CFO thinks that way. You

14:56

know, there's nobody in the company who knows

14:58

how to react to your strategy because the

15:00

investors think that way. Everybody does. You know,

15:02

if you change your business model, everyone's going

15:04

to lose their mind. Yeah. So how, how

15:06

would you even think about changing it midstream?

15:09

It's just that even if

15:11

you knew to have the insight that

15:13

perhaps you should consider it, you just

15:15

wouldn't have the wherewithal to

15:17

do it, because it's so

15:19

embedded in your culture. Your entire value

15:21

delivery system is predicated on a different model.

15:24

Yeah, well, let's keep talking about

15:26

counter positioning and I want to bring

15:28

up I think like if I have

15:30

to pick who the Microsoft is of

15:32

the AI world. Like

15:34

huge, huge, huge tech companies like Microsoft

15:36

and Google aside. I think

15:39

the one right now to think about

15:41

kind of positioning or at least a

15:43

lot of startups are afraid of is

15:46

OpenAI. OpenAI is moving from being this

15:48

API developer tool to like a product

15:50

company. They're releasing all these consumer-facing

15:53

products. ChatGPT is sort of like taking

15:55

over. And so I think a lot

15:57

of founders are thinking about, well, what if,

16:01

you know,

16:03

OpenAI includes this as part of ChatGPT,

16:05

or includes this in some new product

16:07

that they release. And I'm curious how

16:09

you would think about counter positioning that.

16:13

Yeah. So there, there are a couple of

16:15

ways. There are a couple of things I

16:17

find really interesting about OpenAI from a counter

16:19

positioning perspective. So maybe we start with startups

16:21

and then just, there's some general stuff too,

16:23

like with DeepSeek and things like that.

16:25

But, um, so. Like,

16:28

let's just take an example. I'm

16:30

involved with a company called Applied Intuition, and

16:33

they create simulation software. I

16:35

love that name, by the

16:37

way. Yeah, it's pretty good.

16:39

It creates simulation software for

16:41

autonomous vehicles and also technology

16:43

stacks for electric vehicles. And

16:46

these car companies, other than Tesla, don't really

16:48

know how to do EVs, don't know how

16:51

to do AVs. They don't really even know

16:53

how to do software, right? Their entire business

16:55

model is predicated on a supply chain that's

16:57

100 years old where they get parts from

17:00

Bosch and chips from all these people and,

17:02

you know, parts from different tool and die

17:04

shops and everything else. So,

17:06

so applied intuition says, OK, we've got

17:08

a bunch of people from Google and

17:10

Waymo and now some people from Tesla

17:13

and all the best autonomous vehicle, all

17:15

the best EV companies in the world.

17:18

We can build the entire thing that

17:20

you need to sort of

17:22

update your strategy and roadmap to have

17:24

the software-defined car, which is

17:26

where the future is going. Now,

17:30

if you're GM or if you're

17:32

Porsche or you're these big companies,

17:34

that's pretty valuable, but you can't just

17:37

get that when Sam Altman releases his

17:39

next demo at a demo day event,

17:41

right? Like, you know, if you're gonna

17:43

have a software-defined car, there's

17:46

a whole lot of things that you

17:48

have to know intimately: the

17:50

processes of how cars are made

17:52

and manufactured and tested, the whole

17:54

supply chain, and, you know, how

17:56

the delivery system works. And

17:59

so you to succeed as a company like

18:01

that and to really ask for giant contracts

18:03

from these companies, you have

18:05

to have not only AI expertise and products

18:08

but you have to have multi-discipline expertise.

18:11

you know, so like Qasar and Peter, they

18:13

grew up in Detroit. But, you know, before

18:15

they got to Google and Waymo, they were,

18:17

you know, in the car industry at GM.

18:20

And so, yeah, so I like companies

18:22

like that where, you know,

18:25

one way I like to think about it is,

18:27

everybody kind of disses on these

18:29

companies that are just an AI wrapper, right?

18:32

And I'm like, well, if the

18:34

thing that you're wrapping on top

18:36

of involves a process that you

18:38

really know about, that most people

18:40

don't, that may be a path

18:43

to a great company. And

18:45

so I think that that's what I'm interested

18:47

in, is some of those. The AI wrapper

18:50

thing was so silly. I see

18:52

less of that now, which is nice. But

18:54

it was a very silly thing when it first

18:56

started. So one other thing

18:58

about this counter positioning and OpenAI

19:00

that I think is interesting, and I'd

19:03

love to get your read on, is

19:05

one way I have internalized the DeepSeek

19:07

stuff, is that in the early

19:10

days of the internet, all

19:12

of the researchers from Bell

19:14

Labs and folks from AT&T,

19:16

Time Warner, the

19:19

government said, this internet thing's a

19:21

toy. It's never going to be good enough.

19:25

We've tried this before. It doesn't work. These

19:27

protocols are not going to be robust enough.

19:30

And in the short term, you would have been right. None

19:33

of these things looked all that interesting or

19:35

impressive. But, you know, I was

19:37

talking to Steve Sinofsky about this the other

19:39

day. You know, he was at Microsoft at

19:41

the time when the internet took off. And he was

19:43

at Cornell and he saw CU-SeeMe, and

19:46

he goes to Gates. You know, this is going to

19:48

be a tidal wave. This is going to be a

19:50

giant new phenomenon that we got to really pay attention

19:52

to. DeepSeek reminds

19:55

me of that. So like the culture

19:57

in AI, the hyperscalers, right

19:59

now, is you can solve all problems

20:01

by throwing money at it. And

20:04

the DeepSeek guys, who said, well,

20:06

if we're limited with some fundamental

20:08

constraints, what would we do? I

20:11

think that there's going to be

20:13

a cultural shift in AI where

20:15

many people adopt that mindset. And

20:17

you know, that's important because the

20:19

early days of mass computation, the

20:22

IBM PC had a 640K memory limit.

20:25

And so like the Microsoft programmers had

20:27

an advantage because they could write small

20:29

efficient code. It wasn't how many thousand

20:31

lines of code anymore. It was how

20:33

efficient is your code. And

20:35

I think that we might

20:37

see the same phenomenon here

20:39

where people come from the

20:42

bottoms up with very frugal,

20:44

sort of low cost by

20:46

design solutions. And it'll

20:48

be hard for the OpenAIs and

20:50

the Anthropics and those guys. I mean,

20:52

I have huge respect for what they're

20:54

doing, but it'll be hard for them

20:57

to respond to that because what's culturally

20:59

embedded in their operating model is to

21:01

solve everything by throwing money at it.

21:03

you know, hire the best people, throw

21:05

money at it and just keep going,

21:07

keep going faster. That's

21:10

so interesting. You said so many things

21:12

I want to talk about. So one

21:14

is sort of like this, this toy

21:16

thing where people and governments and, like,

21:18

big companies sort of ignore the

21:20

internet at first because they're like, we

21:22

tried it and it doesn't work. It

21:24

doesn't scale or whatever. You have the

21:26

same history with neural networks where like

21:28

in the beginning of AI and symbolic

21:30

AI, like in the 50s, neural networks

21:32

were around then. But they were mostly

21:34

ignored because the early AI people, particularly

21:36

like Marvin Minsky, proved

21:40

that single layer neural networks

21:42

were not as powerful as

21:44

other types of like Turing

21:47

machines, basically, or couldn't

21:49

do certain types of computations. And

21:52

I think academia sort of... by

21:55

and large felt like neural networks were

21:58

not understandable enough. There was no theory,

22:01

and so it felt like a

22:03

toy and it was basically ignored,

22:05

except for a few kind of

22:07

neural network researchers in the 80s

22:09

and 90s, and then industry adopted

22:11

it, and it blew up because...

22:13

well, it just works. Who cares?

22:16

Who cares what the theory is,

22:18

which I think happens all the

22:20

time. And I'll stop there. I'm

22:22

curious. Curious if you have anything to add

22:24

to that. Yeah. And it's funny because I

22:26

even, like, when I was working on

22:28

this book, you know, with the Pattern Breakers stuff,

22:30

one of the examples I used was the

22:32

Wright brothers with the airplane. And

22:34

so all the experts said it's going

22:37

to take a million years to

22:39

create a flying contraption that can fly

22:41

humans in it. The New

22:43

York Times ran an ad

22:45

called Flying Machines That Won't

22:48

Fly, and it

22:50

said that it was a waste of time to

22:52

try, and they had a quote from

22:54

the head of engineering of the

22:56

army and all this stuff. 69

22:59

days later, the Wright brothers at Kitty

23:01

Hawk flew their first plane, and

23:04

they were a couple of bicycle mechanics. And

23:06

so what you see is that, time and

23:09

again, the experts are attached

23:11

to their mental model

23:13

of how the world works.

23:15

And it's the tinkerers.

23:17

It's the people who

23:19

have permissionless innovation, who

23:21

just tinker with stuff and make something work.

23:24

And before you know it, they

23:26

have to even change the science.

23:28

People's understanding of Bernoulli's equation and

23:31

all that stuff got modified and

23:33

improved because of the success of

23:35

the Wright brothers with their planes.

23:38

You know, people tend to think

23:40

that the abstract science precedes the

23:42

engineering, but quite often the engineering

23:44

and the tinkering causes the science

23:46

to evolve to explain the unexplainable.

23:49

And yeah, that's what I see happen more

23:51

often in practice. 100%.

23:53

I think the next point that

23:55

you made is sort of like

23:57

this big money versus small team

24:00

thing, which I think happens all

24:02

the time too. Constraints

24:05

breed creativity. And I think, in

24:07

general, being

24:10

able to throw money at a problem means you don't have

24:12

to spend time thinking about how to make it more efficient.

24:15

And so I think your question

24:17

about, like, are the

24:20

OpenAIs and Anthropics of the world in trouble? I

24:22

think that's an interesting one. I

24:24

would bet not. Right. I would, too. My

24:26

feeling about that is,

24:30

Uh, I mean, obviously the sort of cliche

24:32

thing is like, okay, it's going to stimulate

24:34

demand or whatever, which is fine. I think

24:36

that's true. I think that they'll be able

24:39

to, like, integrate this and have

24:41

more efficient servers, and that can

24:43

serve the, uh, the demand that they currently

24:45

have. I think, uh, I

24:47

think it'll work. Um,

24:49

the thing that it seems like to me

24:52

that this opens up is I think we

24:54

have like mass AI figured out, which is

24:56

like, how do you scale these models up

24:58

so that like, a billion people can use

25:01

ChatGPT. And how

25:03

do you make that efficient and smart enough

25:05

to work and all that kind of stuff?

25:08

But I think one thing

25:10

that people don't talk about

25:12

nearly enough is that the

25:14

capabilities of models today are

25:16

in many ways not limited

25:19

by the

25:21

intelligence of the technology. They're

25:23

limited by the risk profiles

25:25

of the companies that are

25:27

serving them. And if

25:29

you're a gigantic company, you're OpenAI, and you're

25:32

literally like, you know, you have to go give

25:34

government briefings before you launch anything. You're

25:37

going to be pretty like careful about

25:39

what you, what you put out. And

25:41

I think the DeepSeek stuff is

25:43

interesting because it means that, and I

25:45

mean risk in all sorts of different

25:47

ways. There's lots of different ways to

25:49

take risk, but it means that small

25:51

teams can build little models for like

25:54

problems that look like toys that, you know,

25:56

an OpenAI would be like, well, we

25:58

wouldn't do this and I think that is

26:00

like, that's the big thing. I don't

26:02

think that that takes away ChatGPT, but

26:05

it does mean that we have

26:07

way more AI in different corners of the

26:09

world than we would have otherwise, which I

26:11

think is net good. Yeah, you know what,

26:13

Dan, one of my favorite examples of this

26:16

actually comes from the field of rocketry because

26:18

it's so visceral. So, like, you know,

26:20

Elon Musk, he'll launch a Starship. If it

26:22

blows up, he's like, okay, well, we instrumented

26:24

it, we got telemetry, we'll make it better

26:27

next time. NASA's not going to do

26:29

that. If NASA launches

26:31

a rocket, they don't sit there and

26:33

say, easy come, easy go, it blew

26:35

up. And so the

26:37

fact that Elon has a different

26:40

risk profile and is not attached

26:42

to whether it's successful with a capital

26:44

S, it changes the

26:46

calculus of what he can do. And it

26:48

changes the speed with which he can move.

26:51

And so, like I like to say that

26:53

in many cases, the big company is not

26:55

quote unquote dumb compared to the small company.

26:57

It's to your point, they have a different

26:59

risk profile. And they can't,

27:01

they're just certain things they can't do. Like when

27:03

I was working with the guys at Justin TV,

27:05

which became Twitch, if

27:08

they launched something that's insecure, so

27:10

what? Nobody knew who they were. But

27:12

Google can't do that, right? And

27:14

Microsoft can't do that. And the big

27:17

companies can't do that. Hollywood guys can't

27:19

do that. Netflix can't do that. So

27:21

not having to be burdened by what

27:23

could go wrong is a big factor

27:25

in trying things that could go right.

27:27

That makes total sense. I want to

27:29

go back to something that you were

27:31

talking about earlier, talking about

27:33

this company Applied Intuition, which

27:35

you said sells into large car manufacturers. And

27:37

I assume when a large car manufacturer buys

27:40

them, it goes into a Ford vehicle and

27:42

a customer is maybe using it and maybe

27:44

has no idea what it is, but they're

27:46

using it. Is that sort of how it

27:48

works? I think so. It is less of

27:50

an end user type of thing, although that

27:52

might change. I need to be careful what

27:54

I say. But the

27:57

primary customer is the car company that

27:59

says, oh my god, the architecture of

28:01

cars has changed. What do I do?

28:04

Yeah. So the strategy question

28:06

I want to ask you is how you

28:08

think about OEM relationships like that. Because I

28:10

think that's going to be a common thing

28:12

for a lot of AI companies, especially if

28:14

you're working on more foundational model type things,

28:16

is you're going to be integrated into something

28:18

else that has a consumer layer. And

28:21

that's where OpenAI started, and then they

28:24

were like, actually, we want to own the

28:26

UX layer, because that's how everything took off

28:28

as they figured out a form factor that

28:30

worked, and then they have a data flywheel,

28:32

there's all this stuff, right? And my last

28:34

company, it was an OEM. And that

28:36

is a difficult position when you're serving two

28:38

customers. There's like an end user and then

28:40

there's the customer you need to sell to. It's

28:42

hard to generate a lot of power or strategic

28:45

advantage in that situation. And it's

28:47

hard to make a great product. And

28:49

I'm curious how you think about OEM

28:51

type strategies and when they work versus

28:53

when they don't. Yeah, it's tricky. What

28:57

are some examples of where it's worked? I'd

28:59

say Applied is working really well. Intel?

29:01

It's been great. Intel was a

29:04

good one for the PCs. Another

29:07

good example would be Qualcomm

29:09

back in the day, with

29:12

licensing their spread spectrum technologies

29:14

and chips. And

29:16

so it can work. Broadcom would

29:19

be another. Twilio, I guess. Twilio

29:22

is an interesting one. I like that. In

29:24

fact, I like thinking of Twilio as a

29:26

design-win business more than a dev tool.

29:28

I like that framing a lot, actually. The

29:31

term I like to use to

29:34

describe it is a design-win

29:36

model, where you want to become

29:38

viewed by the customer as integral

29:41

to their product strategy. If

29:43

they have a slide that shows all

29:45

these blocks and triangles and arrows and

29:48

stuff, you need to be a big

29:50

square in that slide, what you provide.

29:54

Sometimes, like Twilio, you

29:56

solve a problem that they really have, but that

29:58

they just have no interest in solving on their

30:01

own. So if you're Uber, do

30:03

you really want to have an entire

30:05

team building a messaging update texting platform

30:07

that's a substitute for Twilio? Probably none

30:10

of your best developers want to do

30:12

that inside of Uber. And so you're

30:14

like, hey, I'll just pay. I'll

30:17

pay Twilio every time the earth turns a click

30:19

or I send a message. I'll send them

30:21

tiny fractions of a penny. That's okay. So

30:24

that can work. The other way

30:26

I think it can work is

30:28

if you solve something existential for

30:30

the customer. So like in the

30:32

case of the car companies. The

30:34

end customer, or the

30:36

customer you're selling to? For the

30:38

OEM, actually. So like the problem

30:40

that the car companies have is

30:42

that the Tesla is just a

30:44

fundamentally different architecture than ICE vehicles,

30:47

right? And it's not just that it's

30:49

got a battery and they don't.

30:51

It has to do with

30:53

what their operating

30:55

system is like and how many

30:57

chips they have and how

30:59

messages flow throughout their messaging bus

31:01

and like Tesla is designed the

31:03

way a car would be designed

31:05

by Silicon Valley type of thinkers

31:07

whereas, you know, the ICE

31:09

vehicles of today are mostly you

31:11

know an amalgam of a bajillion

31:13

parts suppliers that they've done business

31:15

with for a very long time. And

31:18

it's kind of like whatever Bosch has

31:20

this year is going to be the

31:22

new windshield wiper sensor thingy that I

31:24

put in the Mercedes, right? And that's

31:26

how they've operated. So they

31:29

look at it and they're just like, look,

31:31

it's just a completely different paradigm of how

31:33

you'd build a car. And so

31:35

you need somebody that can be your

31:37

thought partner in how to build those

31:39

things. And so that can be

31:42

another kind of design win model that works.

31:45

That's interesting. Yeah. Hey

31:47

there, Dan here. I wanted

31:49

to take a one minute break from the episode to

31:51

tell you about our latest sponsor. All

31:53

right, let's play a game. What powerhouse

31:55

productivity tool is also free

31:58

for individuals? Not

32:02

that one. Try again.

32:07

You may not expect this, but

32:09

it's Microsoft Teams. Yep, the same

32:11

Teams that big enterprises swear

32:13

by also has a free plan

32:15

for individuals. Whether you're jamming on

32:17

a side project or bootstrapping a

32:19

startup or building a community, Teams

32:22

has all of the features that other platforms

32:24

nickel-and-dime you for using. You

32:26

can get unlimited chat, 60-minute video

32:28

meetings, file sharing, and collaborative workspaces all

32:31

for free. And the real

32:33

magic is that everything is integrated

32:35

in one seamless collaborative workspace. That

32:37

means there's no more hopping between

32:39

different applications for messages, meetings, and

32:41

file sharing. Teams puts it all

32:43

at your fingertips to save you

32:45

time and money. So

32:47

ditch the app overload and the

32:50

subscription fatigue and use Teams to

32:52

experience effortless collaboration today. Are you

32:54

ready to streamline your workflow? Head

32:57

to aka.ms/every to use Teams

32:59

for free. Your productivity will thank you and

33:01

so will your wallet. And now back to

33:03

the episode. First of all, the thing that

33:06

makes me think of is like there's this

33:08

like knife's edge, which is interesting of this

33:10

strategy, which is you have to be, um,

33:13

critical to their business, but somehow they

33:15

don't want to do it themselves, which

33:17

is like, there's very few things that

33:19

are like that. That's right. Um,

33:21

that, and that's really hard. Either you're critical and

33:24

they're like, maybe we'll work with you, but then

33:26

we'll buy you or we'll just replace you or

33:28

you're not critical. And then it's horrible to

33:30

like try to sell that product. No one wants

33:32

to do that. That's... and I love that

33:35

framing of it. I haven't quite internalized

33:38

it that way, but you're right. They either don't

33:40

want to do it themselves because they just don't

33:42

want to, or they don't want to do it

33:44

themselves because they can't conceive of how they would.

33:48

And they're just like, even if I want

33:51

to, that's kind of academic. I can't. But

33:53

in both cases, it's something that they actively

33:55

choose not to do themselves. And

33:57

there's a persistent reason for that to

33:59

continue. Yeah. And I guess the

34:02

reason, you know, like

34:04

an Applied Intuition would work is... I'm

34:06

just, I'm thinking back to you mentioned

34:08

Clay Christensen. I'm sort of like thinking

34:10

about his conservation of attractive profits. Where

34:14

in the early days of new

34:16

technologies, you want one company to

34:18

integrate all the different steps of

34:20

the value chain, basically, because you

34:23

can iterate much quicker. So like

34:25

Tesla. They don't have this

34:27

huge web of different suppliers. They probably

34:29

have a few, but like a lot

34:31

of it, they're just doing themselves. Whereas

34:33

it sounds like, you know, like GM

34:35

or whatever has like thousands of different

34:37

modular manufacturers that they swap in and

34:39

out because like the architecture of the

34:41

car has been around for so long

34:43

that it's not changing. And so it

34:45

doesn't have to be integrated. It can

34:48

just be like, it can be very

34:50

modular, which I guess is an easier

34:52

OEM sell, like, because Applied can just

34:54

as long as they know that architecture,

34:56

they can sell into it versus like

34:58

a more vertical, more integrated company.

35:00

Yeah. Well, and here's how I internalize

35:02

that, Dan. So just to make sure

35:04

that we're on the same page or

35:06

the same language, like what, what I

35:08

understood from Clay, I've kind of got

35:10

a little bit of a crush, an

35:12

intellectual crush on Clay Christensen. I think

35:14

the guy was amazing and a great

35:16

human being. So, so what I understood

35:18

him to say is that in early

35:21

markets, the products are

35:23

never quite good enough. They

35:25

don't perform well enough. And so

35:27

what happens is vendors get rewarded

35:29

for having the integrated system because

35:31

the customers will pay incremental dollars

35:33

for incrementally better performance because they

35:35

value that enhanced performance. But

35:39

then what eventually happens is

35:41

the performance gets mostly good

35:43

enough. And, you know,

35:45

what Clay Christensen would call it is

35:47

overshot customers. You know, now I'm trying

35:49

to cram new features into my product

35:51

to get customers to keep buying new things

35:53

that I sell them, but now they

35:55

don't want the new things as badly,

35:57

and therefore you get this modularity

35:59

argument that somebody else shows up and

36:01

says, look, you're being overcharged. You don't

36:04

have to have one guy be the

36:06

system integrator anymore in fact you can

36:08

just have a whole bunch of different

36:10

components that you can mix and match

36:12

and swap in and out. And so

36:14

then the conservation of attractive profits, it

36:16

goes to the modular suppliers rather than

36:18

the integrated supplier, which I

36:20

think is happening. That was a

36:22

much better summary of conservation of

36:24

attractive profits than I gave you.

36:27

Well, I don't know. But that's the

36:29

brilliance of Tesla. Elon...

36:32

everybody told him you should act like

36:34

a car company acts. You should

36:36

have modular components and suppliers in

36:38

the supply chain. Elon understood, no,

36:41

nobody can make an electric car that's good enough.

36:43

I have to control all the critical technologies because

36:45

I have to have the ability to have something

36:47

that rises to the level of good enough. Nobody's

36:50

ever had that before. So

36:52

like that's another reason, right? Architecturally,

36:55

he's just totally different, right? His

36:57

whole paradigm of how to build

36:59

a car is just different from

37:01

start to finish. So is that

37:03

an argument for AI companies like

37:05

owning the whole stack themselves right

37:07

now as they're sort of innovating

37:09

on what the products even look

37:11

like and customers are willing to

37:13

pay more for incremental value? Yeah,

37:16

what I like about what Clay used

37:19

to say was that what Clay Christensen

37:21

really had was a bunch of mental

37:23

models for innovators. And,

37:25

you know, whenever I think of a mental

37:27

model, I always like to ask under what

37:29

conditions? So under what conditions would

37:31

I want to be the complete integrated solution?

37:34

I believe that you want to be

37:37

the complete integrated solution if the customers

37:39

are desperate for more performance and will

37:41

pay for that enhanced performance, right? So

37:44

like before Nvidia, there was Silicon Graphics.

37:46

And like if you wanted to make

37:48

dinosaurs in Jurassic Park, you had to

37:51

buy the most expensive SGI machines, millions

37:53

of dollars worth. And if

37:55

you could make the graphics run twice

37:57

as fast, Industrial Light & Magic would pay

37:59

twice as much because it was mission

38:01

critical to render those dinosaurs overnight. But

38:05

now that chips are

38:07

commoditized, NVIDIA has the

38:09

better model because they say, hey, I'll just

38:11

sell you off the shelf, these GPUs. So

38:14

I think that the question always becomes

38:16

under what conditions are you advantaged by

38:18

being the integrated solution? And under what

38:20

conditions are you advantaged by being a

38:22

modular component of the solution? That's

38:25

interesting. And I guess what's your best guess about

38:27

where we are now in the, in the

38:29

AI landscape overall? Cause I think that there's a

38:31

lot of, there is this common thing. And I

38:34

actually felt this too, like when o1 came out

38:36

where people were like, I feel like my model

38:38

is pretty much good enough. Like, I don't know

38:40

what I would use o1 or o3 for even

38:43

in the demos. Like they, I remember the,

38:45

one of the demos was like, list out the

38:47

13 Roman emperors, you know, and it's like, that's

38:49

not really something that I care that

38:51

much about generally, and most people are not doing

38:54

PhD level research. That was my first feeling, but

38:56

to be honest, now I just use o1 all

38:58

the time, and I don't really use any

39:00

other model, or now I use o3. So

39:03

I'm curious what you feel about where we are,

39:06

and how much performance improvements in terms of intelligence

39:08

people are willing to pay for. Well,

39:11

first of all, I'm

39:13

really excited, but I'm probably

39:16

in these tools too much

39:18

now. So I'm probably in

39:20

these tools three, four, five

39:22

hours a day. And

39:25

there's a lot of things that

39:27

I would benefit from in terms

39:30

of enhanced performance. And if that's

39:32

just me, I gotta believe

39:34

there's a lot of other people like that too. So

39:38

the thing that I think

39:40

is so interesting about AI

39:42

is it really rewards the

39:45

system thinker. And so

39:47

I'll give you an example. Uh,

39:49

you know, I have this database of

39:51

what I call a hundred bagger startups

39:53

and I try to understand them all.

39:55

I've got like, I've got the original

39:57

pitch deck for AirBed & Breakfast,

39:59

before it was Airbnb, and I've got

40:01

it for Dropbox and Pinterest and all

40:03

these companies, right? And I,

40:06

I track, you know, if you'd bought a

40:08

share in the seed round, what would have

40:10

happened? I run the inflection theory against it.

40:12

I run insights. I try to understand if

40:14

our frameworks would have caused us to decide.

40:17

Well, now that I have that list, I

40:19

could do all kinds of things. Like I

40:21

can say, okay, please consider

40:24

this list of 100-bagger startups,

40:27

which of Hamilton Helmer's seven powers

40:29

were harnessed by each of them

40:32

as their primary power? Which

40:34

Clay Christensen job to be done was the primary

40:36

job that they did to get product market fit?

40:38

How long did it take them to get 10

40:40

million in revenue? How long did it take them

40:43

to get 100 million in revenue? Which

40:45

of them had a first-time founder

40:47

or CEO, which of them replaced their

40:50

CEO? You know, I mean, if you're

40:52

curious, it's like, it's

40:54

like having an unlimited supply of smart

40:56

people to go do that research for

40:58

you. It's incredible. I feel

41:00

the same way. I can read and think about so

41:02

many more things than I would have been able to

41:04

previously. And it makes it such a pleasure to get

41:06

up every day. It's the best. It's unbelievable. It's just,

41:09

it's a miracle, right? Like, I just wish I was

41:11

in my early twenties again. I'd be,

41:13

I'd be dangerous. Me too. Um,

41:16

uh, well, I guess that, that just makes

41:19

me think like why a hundred bagger startups,

41:21

why not a hundred bagger founders, right? Like

41:23

how much is really in the Airbnb deck

41:25

that's actually that useful? Yeah.

41:27

So I've been, I've been working on that

41:29

question a lot. And so, um, I've

41:32

been, um, applying our

41:34

frameworks and backtesting them to prior

41:36

startups. So I have these things

41:38

that I call atomic eggs and

41:40

we'll probably launch them here pretty

41:42

soon. But, um, what an atomic

41:44

egg lets you do is it

41:47

lets you upload a pitch and

41:49

then it runs a whole bunch

41:51

of different generative models against it.

41:53

So an example would be pattern

41:55

breakers insight stress test. So you

41:57

could upload the Airbnb pitch deck

41:59

and it would spit out, this

42:01

was the fundamental insight with

42:04

Airbnb or this is like the

42:06

part that was non-consensus or

42:08

these are the inflections that Airbnb

42:10

is harnessing. The

42:13

AI has gotten really good at that. And

42:15

then the other thing that it can do,

42:17

I like the Sequoia Arc framework. They

42:20

talk about, is this idea a hair

42:22

on fire problem type? Is it a

42:25

known problem type or is it a

42:27

future vision problem type? You

42:29

can run that against a hundred bagger startups.

42:32

And then I could say, on a scale of one

42:34

to 10, how confident are you that that's the

42:36

right way to classify it? And then

42:38

back to your point about founders. You

42:40

can start to say, OK, there's all these founders.

42:43

What jumps out at you as anomalies

42:45

about these founders? What jumps out at

42:48

you as commonalities about these founders? OK,

42:50

now let's group these startups in different clusters

42:53

and run the same experiment again. And

42:55

then once you get some patterns, you say, OK, how

42:57

might those patterns shift in a world of AI? How might

42:59

they be the same in a world of AI? Like

43:02

you could have just wondered about that as

43:04

you walk down the street in the past,

43:06

but like now you can act on that,

43:08

right? You can act on that curiosity in

43:10

real time. And that's just like, just

43:13

such a game changer, right? If you're

43:15

curious about this stuff. How

43:17

much does it like, because I

43:19

mean, you write a lot about pattern breakers, right?

43:21

So like, I

43:24

guess I'm thinking about

43:26

business theories or strategy

43:29

theories as

43:32

patterns, right? There are always patterns that work

43:34

under certain conditions. Sometimes they're like more general

43:36

than others, but like they're usually not like

43:38

infinitely general. I don't know what

43:41

the right word is perfectly general. And

43:44

like I wonder, you know, for example,

43:47

if you took the let's say let's

43:49

say we like wound back the clock.

43:51

We went back to like the 80s

43:53

and we used all of the like

43:55

frameworks they had in the 80s and

43:57

put them into AI and like gave

43:59

them, you know, Cisco or whatever,

44:02

pick whatever company you want, Google, like

44:05

would it have been able to

44:07

tell from the Google or the Airbnb

44:09

pitch deck that it was a

44:11

good company? I

44:13

don't know that it could have predicted that it

44:15

was going to have the success it had. Yeah.

44:18

And I apply a slightly less stringent

44:20

standard. What I really want to know

44:22

is, should I spend time on this?

44:25

Right. And so what I need

44:27

to know when I look at a pitch like Airbnb

44:29

is: Is

44:31

there something that's wacky and good about

44:34

this that I might overlook if I'm

44:36

busy and tired that day? But

44:38

like if I can run a whole bunch

44:40

of different tests against it. So like, you

44:42

know, you talked earlier about these models, like

44:45

Charlie Munger is somebody else who I've always

44:47

respected. And, you know, he had this saying

44:49

the map is not the territory. And what

44:51

he meant by that is that, you know,

44:54

if you and I want to go from

44:56

San Francisco to Cupertino and we use a

44:58

flat map, and let's say we use Google

45:01

Maps or whatever, the odds

45:03

that we will get there if

45:05

we follow the directions are basically

45:07

100%, like 99%, in fact, I

45:09

would argue that that map is

45:11

a better representation of reality than

45:14

all the complexities of all reality.

45:17

You're trying to compress knowledge for

45:19

the decision that matters, right? But

45:21

if you and I want to

45:23

go to Germany, the

45:25

map is not gonna be an accurate

45:28

portrayal of the territory because straight line

45:30

is not the shortest path on a

45:32

flat map that represents a globe, right?

45:34

It would look like a curved line.

45:37

And so like what you learn is that

45:39

it's like we talked about earlier, the

45:42

question is under what conditions is this

45:44

model useful and under what conditions are

45:46

the boundary conditions exceeded? And that's

45:49

why you wanna have a whole bunch of them, right?

45:51

You wanna have the right tool for the right situation.

45:53

And then when it exceeds the scope of

45:56

the boundaries, you want to not use that

45:58

tool because you'll get bad, bad decision making.

46:00

Are there any new things? Because like one

46:02

of the things you talk about in your

46:05

book a lot that I like, I like

46:07

because this is sort of how I work.

46:09

So it's maybe confirmation bias, but I like

46:11

a lot is sort of the idea of

46:13

living in the future, right? Like the best

46:15

way to know what's coming is to just

46:18

be like you're doing in these tools all

46:20

day, every day. And you start to kind

46:22

of like see things that other people

46:24

maybe won't see because they're just they're living

46:26

in a different reality and your reality is

46:29

going to sort of spread everywhere else eventually

46:31

is the idea. I'm curious if

46:33

there's anything like that that you're feeling

46:35

and seeing right now that you're kind

46:37

of like sensitive to that is new

46:39

and interesting to you. Yeah,

46:42

you know, some of these AI companies

46:44

you'll go to and there'll be somebody

46:46

who's a couple years out of college.

46:48

And they'll be using Devin or

46:50

Cursor or these other products and

46:52

they're kind of creating these agentic

46:55

oriented entities that go out and

46:57

get a bunch of stuff for

46:59

them and bring it back. And

47:02

they just almost act like that's normal. So

47:04

they're almost like programming these virtual employees to

47:06

go out and do stuff for them.
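
In code, that "virtual employee" pattern can be as small as the loop sketched below: a model requests a tool call, the result is fed back, and it keeps going until the errand is done. The tool, model name, and prompt are illustrative assumptions, not any specific product's internals:

```python
import json
from openai import OpenAI

client = OpenAI()

def search_web(query: str) -> str:
    # Stub: wire in any search backend here.
    return f"(stub results for {query!r})"

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web and return a short summary.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user",
             "content": "Find three recent articles about X and summarize them."}]
while True:
    resp = client.chat.completions.create(model="gpt-4o",
                                          messages=messages, tools=TOOLS)
    msg = resp.choices[0].message
    if not msg.tool_calls:        # nothing left to fetch: report back
        print(msg.content)
        break
    messages.append(msg)          # keep the model's tool request in context
    for call in msg.tool_calls:   # "go out, get a bunch of stuff, bring it back"
        args = json.loads(call.function.arguments)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": search_web(**args)})
```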

47:08

And you'll sit with them and you'll say, well,

47:10

what motivated you to do that and to

47:12

think about solving the problem that way? And

47:15

they look at you funny like, well, how

47:17

else would you do it? You

47:19

want me to Google? Yeah. And

47:21

so the thing that I find

47:24

interesting is, you

47:26

know, and this is like how Zuckerberg was

47:28

with social networking, right? Like, Zuck

47:30

didn't have to unlearn anything. You know,

47:32

he grew up at a time when

47:35

the LAMP stack was coming out and

47:37

you could A/B test things and

47:39

broadband was everywhere. Before

47:42

Facebook, you know, like in the 90s,

47:44

you had to have products that were

47:46

well engineered because they just weren't scalable

47:49

enough otherwise, right? You had to have

47:51

experts that would architect and instrument the

47:53

system so that it would be somewhat

47:55

performant. Well, by the time Facebook comes

47:57

around, Zuck's like, hey, well, we'll

48:00

just try it and see what happens by the

48:02

afternoon and decide whether we want to keep

48:04

this or not. Now, did

48:06

Zuck say, aha, there's a disruptive trend and

48:10

I'm going to leapfrog all these

48:12

companies? No, like Zuckerberg didn't know anything about

48:12

business at the time. It's almost

48:14

like if you and I were raised in

48:16

a world of Cartesian coordinates and now it's

48:18

a world of polar coordinates and somebody's born

48:20

in a world of polar coordinates and they

48:22

don't even have to translate between the two.

48:24

They're like, what else is there? That's the

48:26

only thing there is. I think that some

48:29

of these AI natives are like that. And

48:31

so I really want to spend time with

48:33

them. I want to spend time with anybody

48:35

who says: my entire lived

48:37

experience in business is a world where

48:39

you're programming some form of AI assistance

48:41

as a core function of the job.

48:44

I love that. I mean, I see

48:46

this all the time. We have a

48:48

writer who started working with us probably,

48:51

I would say two months ago. He's

48:53

had a very successful career, not as

48:56

a professional writer, just working in AI

48:58

at various tech companies and startups and

49:00

has founded his own startups. But

49:03

he's working for us mostly as a writer. And

49:07

he writes our Sunday email where we talk

49:09

about all the new model releases. He's such

49:12

a nerd for new stuff that comes out,

49:14

which is amazing. That's the kind of person

49:16

you want writing. And

49:19

he also, when a new

49:21

model comes out, I'll often get early access.

49:23

So we'll get on the phone together. He'll

49:26

write a first take of all the things that

49:28

we saw. And then I'll go through and put

49:30

my own take on it and whatever. So we

49:32

co-write things together. And

49:35

the first one that he did like that,

49:37

I got the draft and I was like, he's

49:40

smart. He's excited about this stuff, but he's

49:42

not a professional writer, I can tell. It

49:44

wasn't something that I could just punch up

49:46

and publish. It was like I

49:49

had to rewrite the whole thing. And

49:52

what was crazy is after we did that,

49:54

I was just like, okay, I want you

49:57

to take my draft and then your draft

49:59

and I want you to put it into

50:01

o1 and pull out what changed.
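
A minimal sketch of that exercise (the prompt wording and model call are assumptions, not Every's actual workflow):

```python
from openai import OpenAI

client = OpenAI()

def what_changed(first_draft: str, edited_draft: str) -> str:
    """Ask a reasoning model to pull out what changed between two drafts."""
    prompt = (
        "Compare my first draft with the edited version that shipped. "
        "List the concrete changes and the writing principle behind each "
        "one, so I can apply those principles myself next time.\n\n"
        f"--- FIRST DRAFT ---\n{first_draft}\n\n"
        f"--- EDITED VERSION ---\n{edited_draft}"
    )
    resp = client.chat.completions.create(
        model="o1",  # any strong reasoning model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```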

50:04

And he did that. And we did that a couple of

50:06

times. And we just

50:08

covered the launch of DeepSeek

50:10

together. And the first

50:12

draft he did, it was like he made

50:14

a year's worth of progress in a month.

50:16

Like, I've worked with so many

50:19

writers in my career at this point. And

50:21

I've seen where people are at when

50:23

I first start working with them, and it usually takes

50:25

them like a thousand drafts to make the

50:27

amount of progress that he made in a

50:30

month. It's crazy. Yes. Yeah,

50:33

it's so interesting, right? And I'm finding the

50:35

same thing, Dan. So like, as

50:37

I started working on these mental models for

50:40

seed investing with these generative models, I started to

50:42

say to myself, what is a

50:44

good mental model in the first place?

50:46

Like, has anybody ever defined what one

50:48

is? What should it contain? What

50:51

makes it good versus bad? Under what conditions

50:53

is it good or bad? And

50:55

there wasn't a whole lot about it. You know,

50:57

there's a couple of books on mental models, but

50:59

not a whole lot. So

51:01

I said, you know, before I start

51:03

just saying, here's a mental model, jobs

51:06

to be done, I should create a

51:08

foundational document that's the taxonomy of a

51:10

good mental model and the questions it

51:12

should answer and the flow that it

51:14

should take. So I did that. Now

51:17

I can just say, I'm

51:19

just going to write about jobs to be done

51:21

for what it is. And then

51:23

I run it against this framework and it

51:25

says you're missing A, B and C. And

51:28

I'm like, hey, well, can you elaborate on

51:30

that? And it just, it just adds it,

51:32

you know, and within 30

51:34

minutes, you have something that's just off the

51:36

hook, right? It's just so good. That's great.
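
Concretely, the workflow Maples describes can be as simple as the sketch below; the file names, model choice, and prompt wording are assumptions for illustration, not his actual setup:

```python
from openai import OpenAI

client = OpenAI()

# The foundational document: a taxonomy of what a good mental model contains.
taxonomy = open("mental_model_taxonomy.md").read()
# A new draft to check, e.g. a write-up of jobs to be done.
draft = open("jobs_to_be_done_draft.md").read()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Here is my taxonomy of what a good mental model must contain, "
            "the questions it should answer, and the flow it should take:\n\n"
            f"{taxonomy}\n\n"
            "Here is a draft write-up of one mental model:\n\n"
            f"{draft}\n\n"
            "Tell me what the draft is missing relative to the taxonomy, "
            "then elaborate on each missing piece."
        ),
    }],
)
print(resp.choices[0].message.content)
```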

51:39

And it's like, you just look at that

51:41

and you're just like, it

51:43

just feels like magic. It feels like

51:45

you put on some cape and just

51:47

learned how to fly all of a

51:50

sudden, you know, and I'm just like,

51:52

but like, it goes back to

51:54

systems-level thinking, right? You had to,

51:56

you had to zoom out and say,

51:58

wait, you know, if I'm gonna someday

52:00

have a hundred mental models, I ought

52:02

to define a canonical baseline good

52:04

one. And I ought to have

52:06

a theory about what makes it good, and I ought

52:09

to apply that theory to every one that I do, because

52:11

I'm gonna get leverage if I do that. But now

52:13

I'm gonna make the AI do the work for me.

52:16

And it teaches you stuff, right? Like now you

52:18

say, oh, I thought I knew jobs to be done

52:20

as a mental model, but there are boundary conditions

52:22

I hadn't thought about before that are kind of

52:24

interesting. And so, yeah, it's, you

52:26

know, it's just such a great time to be

52:29

alive with this stuff. I agree. I

52:31

want to go back to the question,

52:33

the original question I asked you because

52:35

it's still on my mind, which is

52:37

software is getting so much cheaper to

52:40

make. The VC

52:42

model, even the seed model,

52:44

which you pioneered, is predicated

52:47

on a different world where it was

52:49

expensive to make software at first and

52:51

then it was free to distribute.

52:53

And I'm curious how you think that

52:56

that might change the VC model if

52:58

at all and I'll preface this by

53:00

saying this is a selfish, selfish question,

53:02

because I run Every. We've got, I

53:05

can't even, it's like, I don't really

53:07

have words for the kind of company

53:09

we are: we have a newsletter with

53:11

a hundred thousand subscribers, and then we

53:14

have three different software products, and we're

53:16

10 people. It's like a whole different

53:18

thing. Yeah. And

53:20

I use that Sparkle thing, by the way.

53:22

It's cool. You do? Oh, I love that.

53:24

That's great. Love to hear

53:27

that. And I

53:29

feel like I want a

53:31

different funding model. And

53:34

I'm working through different options, but I'm kind of

53:36

curious how you think that that might change. Yeah.

53:38

So I've been thinking about it a lot.

53:41

So there's two different angles. There's the

53:43

angle that you're describing, and

53:46

another person that I respect who thinks

53:48

a lot about this is a guy

53:50

named Greg Isenberg on Twitter.

53:52

And so, let me see if

53:55

I can capture what I think it

53:57

is. It's that you have a

53:59

situation where what it takes to build

54:01

a product has collapsed yet again, just

54:04

like it did with the LAMP stack,

54:06

and it's profound in a lot

54:06

of ways. It's not just that it

54:08

costs you less money to build a

54:10

product. Like, you had ChatPRD

54:13

on a few episodes ago. ChatPRD

54:15

lets one person have the entire

54:18

idea premise of the product in their

54:22

own mind and doesn't require them to

54:25

therefore have a giant team of other

54:27

people. So it changes the

54:29

dynamics of who can build software and

54:31

what it takes to build it. And

54:34

so you start to say, OK, well, are

54:36

you gonna have these tiny little companies that

54:39

generate a ton of revenue and they don't

54:41

even have to generate that much to be

54:43

wildly efficient and profitable? Why

54:45

would you need VC money at all? And

54:48

I'm pretty sympathetic to that point of

54:50

view, although I tend to go to

54:52

the founders and say, look, I'm not

54:54

under pressure to put a lot of

54:56

money into you. Our funds are small

54:58

and all things being equal, I'd rather

55:00

have it be one and done and

55:02

we try a few things. Here's

55:05

the other thing, though, that I

55:07

think is really interesting, that I'm

55:09

trying to find kindred spirits around.

55:12

The LAMP stack didn't just

55:14

collapse the cost of startups.

55:17

It created a new way of building. It

55:20

created a new model of building, right? So

55:22

you used to have waterfall development, and

55:25

you had to define everything that's in the

55:27

release upfront, and then you go on a

55:29

death march for a year, and you ship

55:31

it and it either succeeds or it bombs.

55:33

And that was just how products were. And then the

55:35

LAMP stack comes out and you have lean startups and

55:38

agile. And what I'm

55:40

seeing happen now, and I'm

55:42

not sure what to call it. So

55:44

right now I'm calling it

55:47

Darwinian engineering or digital Darwinism.

55:49

So like if you think

55:51

about it like in an

55:53

ecosystem, you

55:55

don't have the individual elements

55:58

and players in the ecosystem

56:00

be programmed in a

56:03

literal way. What you

56:05

have is a system designer, if you will.

56:08

And then the system gets to

56:10

operate autonomously from the designer. And

56:13

so I sit there and I think, man, that

56:16

kind of rhymes for me. So

56:18

I think about it like

56:20

it's like natural evolution rather

56:22

than traditional development. And that

56:25

you're going to have AI

56:27

tools that shift from

56:29

agile to continuous adaptation. And

56:32

you're going to build software elements

56:34

and components that are adaptive by

56:36

design and that can sense and

56:38

respond to the inputs that they

56:40

get in the real world independent

56:43

of the programmer. So rather than

56:45

have a business model canvas, you

56:47

have a business model dashboard that's

56:49

a live status of what's happening.
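
A toy sketch of what "adaptive by design" might mean at the component level; every name and threshold here is invented for illustration:

```python
import random

class AdaptiveRanker:
    """A component that senses a live metric and adjusts its own behavior,
    rather than waiting to be reprogrammed."""

    def __init__(self):
        self.explore_rate = 0.5   # the "environmental condition" being tuned

    def rank(self, items):
        if random.random() < self.explore_rate:
            return random.sample(items, len(items))  # explore: try variations
        return sorted(items)                          # exploit: use what works

    def sense_and_respond(self, engagement: float):
        # Feedback loop: poor engagement -> explore more; good -> settle down.
        if engagement < 0.5:
            self.explore_rate = min(0.9, self.explore_rate + 0.1)
        else:
            self.explore_rate = max(0.1, self.explore_rate - 0.1)
```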

56:52

And so, you know, if you're a gaming

56:54

company, you're going to shift from iterating

56:56

games to creating living worlds, you know,

56:59

and that kind of stuff. So

57:01

I'm really interested in like what does that

57:03

mean for what a product manager is? What

57:06

does that mean for dashboards of the

57:08

future? What does it mean for how

57:10

QA happens? You

57:13

know, all that stuff. I thought about this

57:15

a lot too, because I think we actually

57:17

met originally because you read my article on

57:19

the allocation economy. Yes. And I sort of

57:21

started to think a lot about like what

57:23

is the role of someone who's working in

57:25

the allocation economy and how is that different

57:27

from someone in the knowledge economy? And

57:30

a way that I've been thinking about

57:33

it is in the knowledge economy or

57:35

just any previous economy, the

57:38

work you're doing, especially as an

57:40

IC (a little bit as a middle

57:42

manager or an executive too,

57:44

but a lot of this is

57:46

as an IC), is that

57:48

you're kind of like a sculptor.

57:50

Like everything that happens happens because you

57:52

did it with your hands. You

57:54

have your hands on every little piece of it. And

57:58

I think working with AI models is

58:00

a lot more like being a gardener.

58:03

You're like setting the conditions for the thing

58:05

to grow, and then it just sort of

58:07

grows. And the conditions are like hyperparameters. It's

58:10

like the sun and the soil and the

58:12

water and whatever, and that's going to change

58:14

what comes out. And you know, like OpenAI

58:16

doesn't, when

58:19

ChatGPT responds to a prompt, like no

58:21

one at OpenAI decided that it was going

58:23

to say that, which is

58:25

totally different from Facebook or

58:28

whatever. Like, someone decided what

58:30

you were going to see

58:32

on Facebook. Or maybe

58:34

Facebook's a little bit of both, they have AI

58:36

too. But like, let's just say the New York

58:38

Times: someone decided what's on the homepage. And

58:41

it's totally different. And you're

58:44

right. You

58:46

can tune stuff, but it's

58:49

like... It's much squishier because

58:51

you're kind of tuning the

58:53

like environmental conditions rather than

58:55

the specific thing that happens.
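
One concrete version of "tuning the conditions rather than the outcome" is sampling temperature: it reshapes the model's entire output distribution instead of dictating any single response. A toy illustration with invented token scores:

```python
import math

def sample_distribution(logits, temperature):
    """Softmax with temperature: lower = sharper, higher = more varied."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                     # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [round(e / total, 3) for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate next tokens

print(sample_distribution(logits, 0.2))  # sharp: the top token dominates
print(sample_distribution(logits, 1.5))  # diffuse: outputs vary a lot more
```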

58:57

Yeah, I think that's

58:59

such, it's such a different

59:02

way of working, it's such

59:04

a different way of building

59:06

products. Like,

59:08

if I think about what we're building at Every,

59:10

I don't think we're quite there yet. What

59:12

I see is like I

59:16

mean, obviously, like, building an organization you are

59:18

kind of like doing that but like for

59:20

individuals who are building products... One

59:23

of the things I see is like it's so easy to build a

59:25

feature, you can just build it in an hour, so it's like... Sometimes

59:28

you just build a lot of features and you're like,

59:30

oh, now the product's kind of noisy,

59:32

it's kind of messy, you know. And

59:35

also, it's like the hard thing is

59:38

figuring out what to build, not actually

59:40

building it, which is a different thing.

59:42

But we're not yet in a world

59:44

where it's fully adaptive. But I do

59:46

think you're right. You

59:49

can see that with ChatGPT Canvas or

59:51

Artifacts or whatever, where it's starting to build

59:53

its own UI and stuff. And I think

59:55

that's where we're going. Yeah.

59:58

And it's just interesting, right? Because it

1:00:00

kind of goes back to systems-level

1:00:02

thinking. It's one thing to

1:00:04

think of yourself as building components

1:00:06

or building tools or building the

1:00:08

end thing. It's another thing to

1:00:10

say, I'm building an

1:00:13

ecosystem and the elements of the

1:00:15

ecosystem operate under certain first principles.

1:00:19

But there's a lot of emergent properties

1:00:21

that are going to occur in that

1:00:23

ecosystem that are a function of the

1:00:25

dynamism of the system and how it

1:00:27

interacts with people. I think

1:00:30

that that's just a fundamentally different world

1:00:32

view about how you architect products. And

1:00:35

so I think that that's another, you know,

1:00:37

there's what we said earlier, the very

1:00:39

low-cost, low-end disruptive

1:00:41

innovation ideas. But I think

1:00:43

there's also this "hey, the way software

1:00:45

ought to be built in the first

1:00:48

place" idea, which is interesting as well. Yeah,

1:00:50

it reminds me of like Notion, for

1:00:52

example, you know, it's like Notion. It

1:00:56

has a block system. It has these atomic

1:00:58

elements that you can build anything with rather

1:01:00

than like they built a specific feature to

1:01:02

do a specific job, which is a

1:01:04

different way of thinking about products. It's like

1:01:06

making a language versus like making a hammer.
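
As a rough sketch of that "language versus hammer" idea (the structure below is illustrative, not Notion's actual data model), a single composable element can express pages, lists, and to-dos without a bespoke feature for each:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One atomic element; everything is built by composing these."""
    kind: str                 # e.g. "page", "heading", "todo", "text"
    content: str = ""
    children: list["Block"] = field(default_factory=list)

# A "page" is not a special feature, just a block containing other blocks.
page = Block("page", "Weekly plan", [
    Block("heading", "Goals"),
    Block("todo", "Ship the Sunday email"),
    Block("todo", "Review the draft", [
        Block("text", "Ask about early access"),
    ]),
])
```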

1:01:09

That's right. That's right. Yeah. Yeah. And so

1:01:11

I think that that's going to be really

1:01:13

interesting. And it's

1:01:15

like, you know, to use my example from earlier,

1:01:18

if I want to have mental models for

1:01:20

investing, rather than just jumping straight to it,

1:01:22

what I need to do is I need

1:01:24

to like zoom out a little bit and

1:01:26

say, okay, let me think about this in

1:01:28

a systems level way. What makes

1:01:31

a good mental model in the first

1:01:33

place? Like, how do I

1:01:35

make sure that I have a

1:01:37

foundation built on something really powerful so

1:01:39

that every subsequent piece of activity or

1:01:42

thinking that I do is a multiplier

1:01:44

effect on what's come before. Totally.

1:01:47

Well, Mike, this is

1:01:50

a pleasure. I

1:01:52

feel like I learned a lot. Me too. I'm really

1:01:54

glad we got the chance to hang out. Thanks for

1:01:56

coming on the show. Yeah, thanks, Dan. It was great

1:01:58

to see you. Oh

1:02:07

my gosh, folks. You absolutely,

1:02:09

positively have to smash that like

1:02:11

button and subscribe to AI and I. Why?

1:02:13

Because this show is the

1:02:15

epitome of awesomeness. It's like finding

1:02:17

a treasure chest in your backyard,

1:02:19

but instead of gold, it's

1:02:22

filled with pure unadulterated knowledge bombs about

1:02:24

ChatGPT. Every episode is a

1:02:26

roller coaster of emotions, insights and

1:02:28

laughter that will leave you on the

1:02:30

edge of your seat, craving more.

1:02:32

It's not just a show, it's a

1:02:34

journey into the future with Dan Shipper as

1:02:36

the captain of the spaceship. So do

1:02:39

yourself a favor, hit like,

1:02:41

smash subscribe, strap in for

1:02:43

the ride of your life. And now,

1:02:45

without any further ado, let me just

1:02:47

say, Dan, I'm absolutely hopelessly in love

1:02:49

with you.
