Josh Woodward: Google Labs is Rapidly Building AI Products from 0-to-1

Released Tuesday, 18th March 2025

Episode Transcript


0:00

What I found too building products over the

0:02

years is it's very common. Everyone talks about

0:04

product market fit. You'll know it when you

0:06

see it and all that, which is true.

0:08

But at least for me, I've always felt

0:10

in the first part of building products, you

0:13

iterate a lot on the product and sometimes

0:15

you forget to iterate on the market.

0:17

And finding the right market side is

0:19

also just as important as the right

0:21

product. And you have to connect those

0:23

two. And so I think that in

0:25

these early-stage things with Mariner, that's

0:27

where we are. for a computer to,

0:29

like an AI model to drive your

0:31

computer, yes. That's a huge new capability.

0:33

Is it accurate? Sometimes. Is it

0:36

fast? Not at all yet. Like that's

0:38

kind of where we are in terms

0:40

of the actual kind of use case

0:42

or the capabilities. And then it's about

0:45

finding the right market. Today

1:02

we're excited to welcome Josh Woodward

1:04

from Google Labs, the team behind

1:06

exciting Google AI launches like Notebook

1:08

LM and the computer use agent

1:10

Mariner. Google Labs is Google's experimental

1:12

arm that's in charge of pioneering

1:14

what's next and how we interact

1:16

with technology by thinking about what the

1:18

world might look like decades from now. Josh

1:21

is helping to reimagine human AI

1:23

interaction from the provocative claim that

1:25

writing prompts is already becoming archaic

1:27

to the emergence of multimodal AI

1:29

as a default user experience. He

1:32

shares insights on the rapid innovation

1:34

culture in Google Labs, offers

1:36

a glimpse of what's next in

1:39

generative video, and much more. Josh,

1:41

thank you so much for joining

1:43

me and Ravi today. We are

1:45

excited to hear everything that you're

1:47

doing over at Google Labs. Maybe

1:49

first to start, you mentioned a

1:51

provocative topic to me on your

1:53

way in here. Writing prompts is

1:55

old-fashioned. What do you mean by that?

1:57

Okay, so thanks for having me. And

1:59

we'll look back at this time from

2:01

an end user experience and say, I

2:04

can't believe we tried to write paragraph

2:06

level prompts into these little boxes. So

2:08

I kind of see it splitting a

2:10

little bit right now. On the one

2:12

hand, as a developer, an AI engineer,

2:14

you should see some of the prompts

2:16

that we're writing in labs right now

2:18

are these beautiful, like multi-page prompts. But

2:20

I think for end users, they don't

2:22

have time for that, and you have

2:24

to be almost like some sort of

2:26

whisperer to be able to unlock the

2:28

model's ability. So we're seeing way more

2:30

pull and traction. I've kind of seen

2:32

this in other products in the industry

2:34

too right now. How can you bring

2:36

your own assets? maybe as a prompt,

2:38

drag in a PDF or an image,

2:40

sort of recombine things like that to

2:42

sort of shortcut this giant paragraph writing.

2:44

So I think it's gonna kind of

2:47

divide. I think as engineers, AI engineers,

2:49

you'll keep writing long stuff, but I

2:51

think most people in the world, we're

2:53

probably in a phase that'll sort of

2:55

fade out here pretty soon. So the

2:57

form of the context will change, right?

2:59

You know, so you still have to

3:01

give the model something, right? But

3:03

it might be that you can communicate

3:05

it via picture or communicate it via

3:07

like, just look at this set of

3:09

documents. Yeah, your voice, a video, any

3:11

of that, these models love context. So

3:13

the context is not going to go

3:15

away, but we're making a lot of

3:17

bets right now that the type of

3:19

context and the way you deliver the

3:21

context, that's changing really fast right now.

3:23

I love it. Okay. We're going to

3:25

go deeper into the future of prompts

3:27

and multiple models in this episode. Maybe

3:29

before we do all that, say a

3:32

word on what is Google Labs, you

3:34

know, what's the mission, and tell us

3:36

a little bit more about where you

3:38

sit with inside Google. Yeah, so Google

3:40

Labs, if anyone's heard about it. And

3:42

we had one a long time ago

3:44

that went dormant for a while. And

3:46

this is kind of back. About three

3:48

years ago, it got started. It's really

3:50

a collection of builders. We're trying to

3:52

build new AI products that people love.

3:54

So they can be consumer products, B

3:56

to B products. It's all zero

3:58

to one. It tends to attract an

4:00

interesting mix of people, maybe people who

4:02

have been at Google a while, but

4:04

also a bunch of startup founders and

4:06

ex-founders. And so we kind of mixed

4:08

these people together, and we basically say,

4:10

what's the future of a certain area

4:12

going to look like? So the future

4:15

of creativity or software development or entertainment,

4:17

and they go off in small little

4:19

teams, and they just start building and

4:21

shipping. And so that's how it operates,

4:23

and it sort of sits outside the

4:25

big Google product areas, but we work

4:27

a lot together. But there's kind of

4:29

an interesting interplay there, and I think

4:31

that's been part of what's been fun

4:33

about it, is you can kind of

4:35

dip in and maybe work with search

4:37

or Chrome or other parts of Google.

4:39

But you also kind of have the

4:41

space to explore and experiment and try

4:43

to disrupt, too. And that's kind of

4:45

what we're up to. How do you

4:47

create the culture inside a lab that

4:49

you want? Right? If you think about

4:51

there's got to be a lot more

4:53

failure, presumably, than there are in other

4:55

parts. There's got to be a different

4:57

metric for success than there is at

5:00

just the sheer scale of Google. So

5:02

what is the culture you're trying to

5:04

create and how do you create it?

5:06

So we really pride ourselves in trying

5:08

to be really fast moving as a

5:10

culture. So we'll go from an idea

5:12

to end users' hands in 50 to 100

5:14

days. And that's something that we do

5:16

all kinds of things to try to

5:18

make that happen. So speed matters a

5:20

lot, especially in kind of an AI

5:22

platform shift moment. The other thing is

5:24

we think a lot about sort of

5:26

big things start small. And one of

5:28

the things if you're in a place

5:30

like Google, you're surrounded by some products

5:32

that have billions of people using them.

5:34

And people forget that all these things

5:36

started with solving, usually for one user

5:38

and one pain point. And so for

5:40

us, we get really excited if we

5:42

get like... 10,000 weekly active users. It's

5:45

like, you know, we'll celebrate that. That's

5:47

a big moment when we're starting a

5:49

new project. And for a lot of

5:51

our other kind of groups inside Google,

5:53

their dashboards don't count that low. So

5:55

there's kind of this moment where, you

5:57

know, the size of what we're trying

5:59

to do is very small. It probably

6:01

looks a lot like companies you all.

6:03

work with, honestly, from that perspective. And

6:05

I think the other thing we're trying

6:07

to do is because we sit outside

6:09

the big groups at Google, we kind

6:11

of have one foot in the outside

6:13

world. We do a lot of building

6:15

and kind of co-creating with startups and

6:17

others, but also one foot inside Google

6:19

DeepMind. And so we've got kind

6:21

of a view of where the research

6:23

frontier is, and more importantly, where it's

6:25

going. And so we're often trying to

6:28

take some of those capabilities. In terms of finding

6:30

people who are very creative, people

6:32

who almost, like, see

6:34

themselves as underdogs. They have

6:36

kind of a hustle to them. And so

6:38

we have this whole doc called Labs in

6:41

a nutshell. And my favorite section in the

6:43

doc is called Who Thrives in Labs. And

6:45

there's like 16 or 17 bullets that just

6:47

list them out. And that's kind of how

6:49

we try to build the culture. But you

6:52

do have to normalize things like failure. You

6:54

have to think about things differently around promotion,

6:56

compensation, all these things that you kind of

6:58

would do in a company too. You

7:00

mentioned the deep mind links. I think

7:02

that is super cool. What have you

7:05

found is the kind of ideal kind

7:07

of product builder persona inside labs? Is

7:09

it somebody with a research background? Is

7:11

it somebody with a who comes from

7:13

a successful consumer product background? Is it

7:16

you know, is there the magical unicorn

7:18

that's great at both research and products?

7:20

Yeah, yeah. We take as many unicorns

7:22

as we can find. And we actually

7:24

have found some, which is great. You

7:26

do look for that kind of deep

7:28

model expertise as well as kind of

7:31

like a consumer sensibility. In terms of,

7:33

do those people exist? They exist. They're great

7:35

too, if you can find them. And we

7:37

also kind of have found ways to kind

7:39

of train or develop people. So that's another

7:42

thing we think a lot about is like

7:44

how do you bring in often people

7:46

that might not be the normal talent

7:48

that you look for. So like we're

7:50

always in the interesting kind of zone

7:52

of like who's undervalued, who's kind of

7:54

like really interesting, but maybe not on

7:56

paper. But when you interact with them,

7:58

you look at their... GitHub history.

8:00

I mean, there's all these different signals

8:02

you can look at. But yeah, that's

8:04

kind of how we would think about

8:06

it. Really cool. How do you decide

8:09

what projects to take on next? Is

8:11

it bottom up top down? How does

8:13

that work? Yeah, great question. We kind

8:15

of do a little bit of a

8:17

blend, actually. So on the top-down side,

8:19

we're looking at what are the areas

8:21

that are kind of on mission for

8:23

Google that are strategic to Google, because

8:25

we sit inside it, so we're thinking

8:27

about ourselves. What would the future of

8:29

software development look like? There's tens of

8:31

thousands of software developers at Google, and

8:33

obviously this is an area that AI

8:35

is clearly going to make a big

8:37

change in. So we'll be thinking about

8:39

could we build things for other Googlers,

8:41

but also externally, how do we build

8:43

things like that? So we take that

8:45

kind of top-down view. Think of it

8:47

as almost, I'm from Oklahoma, we like to

8:49

fish a lot in the summer, but

8:52

like you're trying to figure out what ponds

8:54

to fish in. But then we let

8:56

a lot of these teams, often there

8:58

are four or five person teams, come

9:00

up with the right user problems to

9:02

go try to solve. And that's where

9:04

we kind of meet in the middle.

9:06

And I think for a lot of

9:08

other teams, they might look at what

9:10

we do as a little chaotic. You

9:12

know, we don't have, like, multi-quarter roadmaps.

9:14

Like we're trying to survive to the

9:16

next, whatever, 10,000 user milestone and then

9:18

try to grow it. But I would

9:20

say it's kind of

9:22

that sort of

9:24

blend. Oh yeah, so I

9:26

guess if you've ever used the Gemini

9:28

API or AI Studio or NotebookLM

9:30

or any of these things, these are

9:32

products that we've kind of worked on

9:35

from labs. I mean, maybe I'll talk

9:37

about one that's maybe better known

9:39

and one that's coming up. So we're

9:41

very excited about where NotebookLM's going.

9:43

I think we've hit on something where

9:45

you can bring your own sources. into

9:47

it, and really AI should have like

9:49

grips into that stuff. And then you're

9:51

able to kind of create things. So

9:53

a lot of people maybe have heard

9:55

the podcast feature that came out last

9:57

year. There's so much coming that follows

9:59

this pattern. So watch this space. There's

10:01

just a lot you can do with

10:03

that pattern. And I think what's really

10:05

interesting is it gives people a lot

10:07

of control. They feel like they're steering

10:09

the AI. We have this term on

10:11

the team and actually one of the

10:13

marketing people came up with, it's like an

10:16

AI joystick that you're kind of controlling

10:18

it. So that's interesting. I would say

10:20

there's a lot of stuff coming right

10:22

now. We're very excited about Veo,

10:24

Google's video model, and the imagery

10:26

models, and where those kind of come

10:28

together. There are really interesting products coming along in

10:30

this space. I think maybe we can

10:33

talk about that at some point,

10:35

but I think generative video is kind

10:37

of moved from this moment of almost

10:39

possible to possible. And I think this

10:41

year, yeah, well, I think it's, it's

10:43

interesting, these models are still huge. To

10:46

run, like, Veo 2 takes hundreds of computers,

10:48

right? So the cost is very high,

10:50

but just like we've seen with the

10:52

text-based models, like Gemini,

10:54

and even the ones from OpenAI

10:56

and Anthropic, you know, the cost

10:58

is reduced like 97 times in the

11:00

last year. So if you kind of

11:02

assume cost curves like that, what you're

11:04

going to see with these video models,

11:06

what's kind of brand new, say with

11:09

Veo 2, is it's really cracked really high

11:11

quality and physics. So the motion, the

11:13

scenes, the, oh, if you talk to

11:15

a lot of these AI filmmakers, they

11:17

talk about what's your cherry pick rate,

11:19

which is a term for like, how

11:21

many times do you have to run

11:23

it to pick out the things that's really good?

11:25

And what we're seeing with something like Veo is

11:27

a cherry pick rate is going down to like

11:30

one time, got what I want. And so

11:32

the instruction following, the ability for the

11:34

model to kind of adhere to what

11:36

you want is really cool. So I think

11:38

when you put that in tools, you're now

11:40

able to convey ideas in a whole different

11:42

way. What do you think are the solved

11:44

problems and the unsolved problems in AI

11:46

video generation? Because I remember, you know,

11:48

last year it was like, you know,

11:51

even last year there were all these,

11:53

you know, there was so much talk

11:55

about, you know, generative video is, you

11:57

know, a physics simulator for example. It

11:59

can kind of emulate physics, and it's

12:01

like that's amazing is the physics stuff

12:03

solved do you think like what else

12:05

is you know what's done and then

12:07

what's to be solved yeah I would

12:09

say physics is a hard thing to

12:11

solve forever it's close I would say

12:13

it's close enough. Yeah, but go back six

12:15

months, a few years ago, you

12:17

had Will Smith eating, you know, pasta; it

12:19

was a disaster and then even last

12:21

year you had kind of these videos

12:23

of like knives cutting off fingers and

12:25

there were six fingers and you know

12:27

it's like, that's where we were. So

12:29

I think physics, and the ability

12:31

to do photorealistic quality, very

12:33

huge progress. The ability to kind of

12:35

do jump scenes and jump cuts and

12:38

different sort of camera controls, that's really

12:40

coming into almost solved. There's paths to

12:42

solve all this stuff. Still gotta solve

12:44

the efficiency and serving costs, I would

12:46

say, and probably still have to figure

12:48

out a little bit more of like

12:50

the application layer of this. Because I

12:52

think this is another big opportunity as

12:54

we've seen like a lot of other

12:56

modalities with AI. You get kind of

12:58

the model layer, you get kind of

13:00

the tool layer and then the real

13:02

value we think is in this application

13:04

layer. And so I think that's really

13:06

interesting to rethink workflows around video and

13:08

I think that's pretty wide open right

13:10

now. Do you think that models are

13:12

capable of even having video that is

13:14

malleable at the application layer? So for

13:16

example, if I want to have character

13:19

consistency between scenes, or the model is

13:21

even capable of that, or I imagine

13:23

you want model durability in order to

13:25

be able to work with it at

13:27

the application level, like what is model

13:29

readiness and what's required in order to

13:31

be able to do magic at the

13:33

application? Yeah, so I was talking to

13:35

a couple of AI filmmakers this week

13:37

and what they're really interested in is

13:39

exactly what you're saying. character consistency, scene

13:41

consistency, camera control. It's almost like we

13:43

need to build an AI camera. If

13:45

you think of some of the cameras

13:47

that are kind of filming us right

13:49

now, this is sort of like decades

13:51

of technology that's kind of been perfected

13:53

for a certain sort of input output.

13:55

And I think we're on the verge

13:58

of needing to create a new

14:00

AI camera. And when you do that,

14:02

you can generate infinite number of scenes.

14:04

You can generate like, oh, you're wearing

14:06

a red sweater, now make it blue,

14:08

and not just in that scene, but

14:10

in like a whole two-hour film. So

14:12

there's all kinds of ways that we're

14:14

starting to see these prototypes that we're

14:16

working on internally, where this is

14:18

here, like it's coming. We're kind of

14:20

entire, I think, things that used to

14:22

either be too expensive or too time-consuming

14:24

or it required a certain skill level.

14:26

We kind of talked internally on the

14:28

team about how do you kind of

14:30

lower the bar and raise the ceiling.

14:32

And what we think about that when

14:34

we're building products is how do you

14:36

make something more accessible? Or how do

14:39

you make the pros take it and

14:41

just blow the quality out of the

14:43

water and make it incredible stuff? So

14:45

that's what we're seeing in the video.

14:47

It's kind of right at that point

14:49

where both are happening. There's an interesting

14:51

tweet or post from Paul Graham

14:53

recently on this idea, I think.

14:55

Based on the pace of progress, he's

14:57

like, you sort of want to be

14:59

building things that kind of don't quite

15:01

work yes and are way too expensive

15:03

yes right yes because they're gonna work

15:05

yeah and their cost is gonna come

15:07

way down yep right and so I

15:09

would imagine that has applicability for you

15:11

guys to particularly in video that's exactly

15:13

how we do it yeah right now

15:15

I don't know off the top of

15:18

my head, but each eight-second video clip

15:20

generated is obscenely expensive but we're basically

15:22

building for a world where this is

15:24

going to be like you're going to

15:26

generate five at a time not even

15:28

think about it. One of the actual

15:30

principles I've kind of learned just over

15:32

the last few years working on this

15:34

AI stuff is make sure your product

15:36

is aligned to the models getting smarter,

15:38

cheaper, faster. And if your core product

15:40

value prop can benefit from those tailwinds,

15:42

you're in a good spot. If any

15:44

of those are not right, question your

15:46

existence. Like that would be my summary

15:48

takeaway on that. How far do you

15:50

think we are from having economics of

15:52

video generation that are, you know, right

15:54

side up? Where, you know, it costs

15:57

less to generate the thing than the

15:59

economic value of generating it. Oh wow,

16:01

this is tough. This is a prediction

16:03

you're never really sure about. I don't

16:05

know, but I would say one thing

16:07

we're seeing just as we're modeling out

16:09

a lot of costs because we're starting

16:11

to put Veo into some of our

16:13

own tools that are coming out is

16:15

we're probably going to need innovation on

16:17

the business model side in addition to

16:19

just the product and the application layer.

16:21

And what I mean by that is

16:23

you could, our first thought was, oh,

16:25

let's just make a subscription

16:27

and then charge per usage on top. Another

16:30

way to do it is when you

16:32

talk to some of these creatives, whether

16:34

they're in Hollywood or even these AI

16:36

filmmakers that are popping up, they're kind

16:38

of like, okay, I want this output and

16:40

I'm willing to pay this much. And it's

16:43

kind of a pay per output, kind of,

16:45

which you've seen in other cases that AI

16:47

companies are starting to do some of this

16:49

too. But for sort of film and video,

16:52

that's it's a little bit how you think

16:54

of doing a project if you're a producer.

16:56

But now you're kind of imagining

16:58

it at like the individual creative

17:01

level, which is kind of interesting.

17:03

So that's more like almost like

17:05

an auction type model, potentially. So

17:08

I think there's a lot to explore.

17:10

I think we're probably though, you know,

17:12

the pace things are moving, it's on

17:14

the scale of like quarters, I think,

17:16

where it starts to get interesting, as

17:19

opposed to like many, many years. So

17:21

that's, yeah, I think there's a path.

17:23

You talked about the pace of progress

17:25

of progress there. Is it a, I don't know, harbinger for

17:27

some of the others. Yeah, yeah, as

17:30

a proxy, yeah, yeah. Where are we

17:32

at? Are we accelerating? Are we, you

17:34

know, on a crazy trajectory and maintaining

17:36

the same one? Like, yeah. I'm interested.

17:39

Yeah, yeah. I keep thinking it will

17:41

slow down and it's never slowed down

17:43

in the last three years. So, you

17:46

know, you think, oh, pre-training might

17:48

be plateauing, inference time compute, a

17:50

whole other horizon opens up. And

17:52

I think there's so much, there's

17:54

an author on the team, we

17:56

actually hired, his name's Steven Johnson,

17:58

he co-founded NotebookLM, when we first

18:00

brought him on. And he talks about

18:02

this notion of, like, the adjacent possible.

18:04

He has this really interesting book on

18:06

the history of innovation. And I feel

18:08

like right now, it's like you walk

18:10

into this room and there's all these

18:12

doors that are opening up into these

18:14

adjacent possible. And there's not just like

18:17

one room and one door. It's like

18:19

one room, like, I don't know, it

18:21

feels like 30 doors that you can

18:23

go explore. So I think that's what

18:25

it feels like on the inside. I

18:27

love that visual of the rooms and

18:29

then the adjacent possibilities. I'm gonna steal

18:31

that and maybe take it and call

18:33

it my own. Classic VC over here.

18:35

What do you think the future of

18:37

video consumption looks like for us as

18:39

consumers? Like am I still looking at?

18:41

Hollywood style feature films that are created

18:43

by Hollywood studios just done a lot

18:45

more cost efficiently? Am I looking at

18:47

a piece of content that's dynamically generated

18:49

to what you know about me and

18:51

it's only for me to watch? What

18:53

do you think the future of consumption

18:55

is? So this is one of those

18:57

that could go and spider in many

18:59

different ways I would say. I'd say

19:01

some of the things we're excited about

19:03

and what we see. So I think

19:05

the future of entertainment is way more

19:07

steerable. So right now you think about

19:09

you sit on your couch like this

19:11

and you maybe scroll through something or

19:13

whatever you cast it on you bring

19:15

it up on the TV So it's

19:17

gonna be way more steerable where you

19:19

can kind of interject if you want

19:21

and maybe take it certain ways We

19:23

think that's one area. We think another

19:25

is personalization like you said if you

19:27

think today about YouTube, TikTok, any

19:29

of these algorithms that can kind of

19:31

figure out, this is what you're interested

19:33

in. Imagine that, I think, way more

19:35

extreme, that could be kind of fine-tuned

19:37

to sort of what you want to

19:39

share with the model. I think the

19:41

other bit is a lot of this,

19:43

I think it's going to be generated

19:45

on the fly. So another theory we

19:47

have is that just like there was

19:49

a rise of kind of a creator

19:51

class, a couple, whatever, 10, 15 years

19:53

ago, that powered YouTube and the rest.

19:55

There's going to be a shift or

19:57

maybe it's a different set of people

19:59

that we think of as like curators

20:01

where you curate stuff and you work

20:03

with the model to maybe create things.

20:05

And I think another loop in that

20:07

is how you can remix all this.

20:09

And so that's another big part of

20:11

what we see in the future of

20:13

entertainment, is that there will be like,

20:15

oh, I kind of like that, but

20:17

then make it more like this. And

20:20

if you think, you know, at some

20:22

level, the cost, the time, the skills

20:24

required of this is literally maybe just

20:26

like tapping a button or just describing

20:28

it. And you get kind of different

20:30

versions. Will those percentages hold? Like we know

20:32

today that a lot of times, certain

20:34

percentage, like 90, 95% just consume from

20:36

platforms. And there's a very small creator

20:38

class. Like, will that balance change? But

20:40

I see totally different ways you

20:42

could think about content platforms that have

20:44

some of these native controls. Like, for

20:46

example, will we expect UIs that have

20:48

a join button where, you know, today

20:50

our UIs maybe have a play, pause,

20:52

whatever. Save, bookmark something good, star it, heart

20:54

it. Like, will there be like new

20:56

things where you join? And they're like,

20:58

oh, hey, Sonia, Ravi, what do you

21:00

want to talk about? Do you know

21:02

what I mean? And I think like,

21:04

that's totally possible. We're building that in

21:06

NotebookLM today. So that you

21:08

can imagine, play it forward. You've got

21:10

avatars or human-like characters or not, with

21:12

lip animation, voice cloning, all that can

21:14

come together, in sort of new ways,

21:16

I think. Yeah, I think that's real

21:18

possibility. Yeah, there's a whole interesting intersection

21:20

that's happening right now between movies or

21:22

video content, games and sort of world

21:24

building and 3D. And it's really unclear

21:26

to us right now where that's going

21:28

to go, but there's so many areas

21:30

right now where we're seeing learnings from

21:32

each and even down to some of

21:34

the training techniques, we're finding things like

21:36

that. So actually that's going to be

21:38

one of my questions. Like if you

21:40

look at all the companies building generative

21:42

video models right now, some people are

21:44

kind of going straight from the, you

21:46

know, the pixel stream, so to speak.

21:48

And some people are going from the

21:50

3D angle with the idea that, you

21:52

know, to really do video right, you

21:54

need to get 3D. Do you have

21:56

an opinion on that? Yeah, we've actually

21:58

got bets on both sides right now.

22:00

I don't know. I don't know. I

22:02

don't know. Yeah, we're hedged on this.

22:04

And so on the 3D side, we

22:06

have this project we got started where

22:08

we basically said, like, take six

22:10

pictures of a sneaker and create a

22:12

3D spin of it. And we put

22:14

that on search. It's been really great.

22:16

And it's amazing how it fills in

22:18

the details. But I think what's interesting

22:20

is we've been going down that path,

22:23

something like Veo 2 shows up. Now you

22:25

don't need six photos anymore. You need

22:27

like two or three. And you can

22:29

basically do like an entire product catalog,

22:31

like every product that's ever been indexed

22:33

at Google, just overnight you sort of can

22:35

create it. So now you've got a

22:37

3D object, basically of any object, bookshelf,

22:39

chair, whatever, from any angle that you

22:41

can pan, tilt, zoom, relight, and now

22:43

that's like an object that you can

22:45

drop in anywhere. So that's kind of

22:47

the 3D angle. From the video angle,

22:49

it's interesting, like kind of the world

22:51

building. We had this little prototype we

22:53

built. We were like, wouldn't it be

22:55

cool if you could recreate landing on

22:57

the moon for like every classroom? in

22:59

the like, you know, lunar module as

23:01

it's coming down. So we built this

23:03

thing. It's kind of terrifying, actually, because

23:05

we also built a little side panel

23:07

where you can inject problems where it's

23:09

like, oh no, something's on fire in

23:11

the back. They're like, simulate things. We

23:13

had a little fun with it. But

23:15

that was interesting because the models, you

23:17

could say, like, look right. And it

23:19

would actually fill in the details. And

23:21

so you start to get this. That's

23:23

where it feels like it's kind of blurring. In 2025 everyone's talking

23:49

about agents. Yes, yeah, computer agents. Yeah,

23:51

you just said it three times exactly

23:53

I've been called a VC twice today

23:55

this is a very big insult can

23:57

you talk to us about Google Mariner?

23:59

Yeah, yeah. So Mariner, one we put

24:01

out in December last year. This is

24:03

a fun one actually because we started

24:05

seeing this capability developing in the model.

24:07

We're trying to understand if you could

24:09

let these models control your computer or

24:11

your browser. What would happen? Good and

24:13

bad. And so that was a good

24:15

example of a project where we went

24:17

from, hey, this capability is kind of

24:19

showing up. Let's put it into, right

24:21

now it's a chrome extension, just because

24:23

it was quick to build, idea in

24:26

people's hands, 84 days. Very fast, very

24:28

fun, a lot of memories made on

24:30

that. But I think what's interesting is

24:32

you're seeing both across Anthropic, Open AI,

24:34

obviously Google, and a bunch of other

24:36

startups in the space are all hitting

24:38

on kind of the same idea that

24:40

models are not just about maybe knowledge

24:42

and information and synthesis and writing, but

24:44

they can do things. And they can

24:46

scroll, they can type, they can click,

24:48

they can not only do this in

24:50

one browser, in one session, but like

24:52

an infinite number, in the background. So

24:54

I think with Mariner what we're really

24:56

trying to pursue is like, of course

24:58

there's the near-term thing of like, can

25:00

it complete tasks in your browser, but

25:02

the bigger thing is, what's the future

25:04

of human-computer interaction look like when you

25:06

have something like this, kind of not

25:08

just one of these things, but basically

25:10

like an infinite number, kind of at

25:12

your disposal. And so that's what we're

25:14

chasing with that project. What do you

25:16

think the ideal use cases are, maybe

25:18

even in the near term for mariner?

25:20

Because I think all the demo videos

25:22

I see, not necessarily from mariner specifically,

25:24

but with computer use more broadly, or

25:26

you know, here, have this agent go

25:28

book a flight for me or go

25:30

order a pizza on DoorDash for me. Like

25:32

that's nice, but like. I like doing

25:34

those things. Yeah, yeah, you're pretty good

25:36

on those on your phone. Booking a

25:38

flight is one of my delights in

25:40

life. And so what do you think

25:42

are the killer kind of consumer

25:44

use cases? Yeah, well, that's what's interesting.

25:46

It may not be consumer. It may be

25:48

enterprise. And one of the things we're

25:50

seeing when we do all the user

25:52

research right now on Mariner, because we

25:54

have it in trusted tester and people are

25:56

playing with it and giving a lot of feedback,

25:58

is it's really these high toil activities.

26:00

Toil is kind of an old-fashioned word

26:02

that doesn't get used a lot. But

26:04

this is when people talk about it,

26:06

it's like, this is what makes me

26:08

grumpy. And this thing is helping me

26:10

solve it. But what's interesting is a

26:12

lot more of those are showing up

26:14

on the enterprise side. Just to give

26:16

you a couple examples from yesterday, we

26:18

were hearing from one of the teams

26:20

and they're basically, they have this code

26:22

browser use case. So imagine you're in

26:24

like a call center somewhere. Some customer calls

26:26

in. They right now have this very

26:29

complicated way the agent in the call

26:31

center can like remotely take over your

26:33

machine that's not working, browse through things

26:35

and do something for you. They were

26:37

like, we would love to have mariner

26:39

do this. And that's like a way.

26:41

Another one we heard, which was kind

26:43

of interesting, was people, they're like part

26:45

of a sales team or something they

26:47

have. Take a customer call, then they've

26:49

got all these next steps they need

26:51

to do, and they just want to

26:53

fan that out. And it's often updating

26:55

different systems that are all probably, I

26:57

don't know, some SaaS subscriptions they're paying

26:59

everywhere. And they're just like the UI

27:01

is clunky, it takes a long time,

27:03

I just want to say mariner, do

27:05

all this. So these are the kinds

27:07

of things that are kind of interesting,

27:09

that are just naturally coming up. On

27:11

the consumer side, I don't know, have

27:13

you found one yet, in your mind?

27:21

I'm curious. I'm trying to

27:23

think, what, the toil I have in

27:25

my everyday life. Yeah. Talking to Ravi.

27:27

I'm kidding, I'm kidding. Talking to Ravi

27:29

is the best part of my day.

27:31

You want to automate that?

27:33

I really appreciate that. I think that,

27:35

but I like the framework, even if

27:37

we don't have the exact use, the

27:39

framework of like, what are the things

27:41

that are the heavy lifting that you

27:43

don't enjoy, right, right, throughout the day,

27:45

that take up time away. That yielded

27:47

things like DoorDash or Instacart, right? Right,

27:49

right. You see how I had to

27:51

get Instacart in there? I'm just making

27:53

sure that that was there. On the

27:55

enterprise side, when you think about it,

27:57

yeah. How are you testing that? Are

27:59

you testing that with existing customers? Are

28:01

you testing that with Google Cloud customers?

28:03

Like who are the enterprises that you

28:05

guys will actually test things with? Yeah,

28:07

so in that case, we kind of

28:09

go across big and small. So there

28:11

will be some cloud customers. We have

28:13

a lot of cloud customers who always

28:15

want the latest and greatest. Give us

28:17

that. They have labs equivalents inside their

28:19

companies, right? So those are awesome test

28:21

beds. We also work with a lot

28:23

of startups. And I mean if there's

28:25

others listening to this that are interested,

28:27

let me, let me know, like, because

28:29

we're always trying to learn kind of

28:32

from different sides of the market. What

28:34

I found too building products over the

28:36

years is it's very common, everyone talks

28:38

about product market fit, you'll know it

28:40

when you see it and all that,

28:42

which is true, but at least for

28:44

me, I've always felt in the first

28:46

part of building products, you iterate a

28:48

lot on the product and sometimes you

28:50

forget to iterate on the market. And

28:52

finding the right market side is also

28:54

just as important as the right product.

28:56

And you have to connect those two.

28:58

And so I think that in these

29:00

early-stage things with Mariner, that's where

29:02

we are. It's like, is it possible

29:04

for a computer to, like an AI

29:06

model to drive your computer? Yes. That's

29:08

a huge new capability. Is it accurate?

29:10

Sometimes. Is it fast? Not at all

29:12

yet. Like that's kind of where we

29:14

are, in terms of the actual kind

29:16

of use case or the capabilities. And

29:18

then it's about finding the right market.

29:20

But yeah, to answer short, it's kind

29:22

of, in these early days, we do

29:24

lots of stuff really quickly. And what

29:26

I kind of coach our product managers

29:28

on and other people on the team,

29:30

because we have engineers and UXs, they

29:32

all go to these sessions, is like,

29:34

look at the customers' eyes. And

29:36

when you show them stuff, do they

29:38

light up or not? You know what

29:40

I mean? And like that's kind of

29:42

the signal you're following. It's way more

29:44

art than science at this stage. Can

29:46

we go back for a second just

29:48

to the context point? Because I was

29:50

thinking about this vis-a-vis like you working

29:52

at Google, right? And you talked about

29:54

bringing your own, you know? Is there

29:56

a world where someone can just opt

29:58

in of like, Google knows a lot

30:00

about me, right, already, you know, my

30:02

searches, my Gmail, my calendar, is there

30:04

a world where you can just sort

30:06

of opt in and be like, I

30:08

don't want to bring it all now,

30:10

I just kind of want you to

30:12

use what you got and make magic,

30:14

right? Is that something that could happen?

30:16

Because Google's uniquely suited to be able

30:18

to do something like that, probably more

30:20

so than anybody. Yeah, is that something

30:22

that you guys can play with in labs

30:24

or have a possibility for? Like,

30:26

with data on the team, right,

30:28

where I've opted into a lot of

30:30

things. It was just like, take it

30:32

all. Like, let's make good stuff. But

30:35

I think you'll see some of that

30:37

come through in the Gemini app too,

30:39

where you can link different things. But

30:41

I think it's actually an area that's

30:43

like actively kind of being explored too.

30:45

Of like what types of data is

30:47

like the most interesting and the most

30:49

useful. And of course also the right

30:51

controls where people feel like, okay, I'm

30:53

not just giving it away. Yeah, so

30:55

I think that is an area though,

30:57

that we do experiment on some, but

30:59

I'd say right now a lot of

31:01

the experiments are more on our own

31:03

stuff, as we're trying to figure out.

31:05

You're going to have to tell us

31:07

separately some of the things that you

31:09

could have done now, now that they

31:11

know everything about you. You know, like,

31:13

what is the magic that can be

31:15

created for you? Yeah, I think certain

31:17

things that immediately come to mind that

31:19

are pretty powerful is you can see things like, in my own data, I feel like I have a second brain. There is a

31:37

true, like there's always been this vision

31:39

of a second brain and tools for

31:41

thought and all this stuff. And I

31:43

feel like you can get pretty close

31:45

to that. And I think the Gemini

31:47

model specifically is really good at long

31:49

context, the ability to have this like

31:51

impressive short-term memory. And so Gemini too,

31:53

that's an area we're really trying to

31:55

exploit right now, like how to use

31:57

that. On Mariner, similar question to what I

31:59

asked on Veo. When do you think

32:01

we'll have computer use that is accurate

32:03

enough and is fast enough to do

32:05

some of these use cases you talked

32:07

about? Yeah, that's another one. It's kind

32:09

of hard to tell at the pace

32:11

though right now. I mean, not just

32:13

inside Google, but what you're seeing from

32:15

some of the other labs too. They're

32:17

on like about an every month or

32:19

two rev. So you can imagine just

32:21

this year, we're going to see four,

32:23

five, six revs of each of these

32:25

things, right? Again, that's just what we

32:27

know is happening. I think the areas

32:29

that are a little bit trickier harder

32:31

right now is how the computer like

32:33

finely or precisely navigates, like the X,

32:35

Y coordinates almost, like a lat long

32:38

of your screen. And there's still kind

32:40

of really interesting jagged edges on that,

32:42

I would say. The other big area

32:44

I would say is like this, it's

32:46

more of a human thing. Like, when

32:48

do you want the human involved or

32:50

not? When do they want to be

32:52

involved or not? And kind of creating

32:54

the right construct almost? It's like, hey,

32:56

I'm about to buy something. Oh no,

32:58

I want to know about that. Or

33:00

I'm okay for $5, but nothing more

33:02

than that. Do you know what I

33:04

mean? And so there's a whole bunch

33:06

of almost like hardcore like HCI, like

33:08

research and like really going deep on

33:10

the empathy of like how you set

33:12

those controls. that I don't think any

33:14

of them, including the Google Mariner one

33:16

right now, we don't have, I mean

33:18

we do certain very blunt things, like

33:20

don't buy anything. Don't consent to any

33:22

ToS. There's sort of like crude things

33:24

right now that you can do, but

33:26

I think people are going to want

33:28

a more fine-grained way. So these are

33:30

some of the things that are I

33:32

consider more unsolved. Again, that principle, just

33:34

banking on the model is going to

33:36

get smarter, faster, cheaper. And you're going

33:38

to get like four or five, six

33:40

or seven revs this year. Yeah. Okay,

33:42

I have a meta question. Yeah.

33:44

How come all of the research labs

33:46

converged on computer use at like, as

33:48

far as I can tell, the same

33:50

exact point in time. Was that just

33:52

all the technology happened to converge at

33:54

the same time? Like, what happened there?

33:56

Good question. I mean, I don't know

33:58

the specifics there of each of the

34:00

other labs.

34:02

It's not uncommon that discoveries kind of

34:04

happen around the same time. And I

34:06

think there's kind of a new paradigm

34:08

now with these models, and I think

34:10

lots of people are seeing the potential

34:12

in certain ways. And I'm sure there's

34:14

also, I don't know, people changing labs

34:16

and other things that are cross pollinating

34:18

all these ideas too, but it does

34:20

feel like it's one of those is

34:22

kind of how I'm interpreting it is

34:24

like, I think similar with coding, right?

34:26

You saw there's already even the agent

34:28

stuff right now, there's lots of this

34:30

stuff kind of bubbling, which makes it

34:32

really fun, but also keeps you on

34:34

your toes, right? I think Matt Ridley

34:36

is the one who's written about some

34:38

of these things about like adjacent innovations.

34:41

You have Steven Johnson. Maybe why did

34:43

you hire Steven Johnson? How did that

34:45

happen? Yeah. Are you going to think

34:47

about other people that don't have obvious

34:49

backgrounds that you would bring in the

34:51

labs? Yeah, yeah. So the quick story

34:53

on Steven was, the guy who kind

34:55

of restarted Google Labs is a guy

34:57

named Clay Bavor, who is a mutual

34:59

friend. Exactly. And he and I are

35:01

big fans; we've basically read everything Steven

35:03

had written. And Steve was a very

35:05

interesting guy because for like decades, he's

35:07

been in search of the perfect tool

35:09

for thought. And so Clay cold

35:11

emailed him. We were both subscribers to

35:13

his Substack. We kind of messaged

35:15

him and we're like, we love you.

35:17

Will you come work with us? We

35:19

can build the tool you've been wanting

35:21

to build. That's where it started, actually.

35:23

And this was, I mean, it was

35:25

like summer 2022. So like before any

35:27

of the, you know, ChatGPT moment or

35:29

anything, and Stephen picked up the phone,

35:31

he was like, yeah, let's do it.

35:33

So he came in, he was a

35:35

visiting scholar, the job ladder didn't exist.

35:37

I had to go figure out with

35:39

our HR person how to create a

35:41

role that he could

35:43

take on. Matt has that kind of

35:45

history obviously. I've read a bunch of

35:47

Matt's books. I don't know Matt, he'd

35:49

be awesome. So if he's listening. Yeah,

35:51

exactly. He's listening. Come talk to both

35:53

of us. Yeah. I would say.

35:55

Okay, we've done this

35:57

quite a bit.

35:59

So we've actually brought

36:01

in musicians. I'm

36:03

actually really, we're trying to figure out

36:06

right now like a visiting filmmaker. That's cool.

36:08

So it's kind of a model, Steven

36:10

kind of pioneered it. He was the first

36:12

one that it's like, how to bring

36:14

in, it's a big value in labs and

36:16

how do we co-create? We don't wanna

36:18

just make stuff and throw it out

36:21

there. We actually wanna co-create it with

36:23

the people that are in the industry. And

36:25

what we find when we do that

36:27

is you actually get way beyond the like,

36:29

oh, that's cool toy AI feature. You

36:31

get into the workflow. And if you're working

36:33

with someone like Steven Johnson, who's written

36:36

a dozen-plus books, there's a certain way he

36:38

thinks about and almost like a respect

36:40

for like the sources and the citations. All

36:42

that stuff comes through in NotebookLM.

36:44

And we're doing similar stuff with music

36:46

and video and other stuff. Is

36:48

the goal to create new products that

36:50

you can take from one to 100 to

36:53

a billion standalone? Or is the goal

36:55

to find product market fit with things like

36:57

NotebookLM and then really fold them

36:59

into the Google mothership, so to speak.

37:01

Yeah, it's interesting. So when we first started,

37:03

I would say it was all about

37:05

build something, graduate it. So kind of a

37:08

traditional incubator sort of model. It's been

37:10

interesting as it's gone along. We've done that

37:12

some cases, like AI Studio and the

37:14

Gemini API, we graduated and it's now in

37:16

DeepMind and they're kind of running with

37:18

it. Something like NotebookLM we were

37:20

just gonna keep in labs right now for

37:22

the foreseeable future because it's kind of

37:25

a different creature. Like it's only possible with

37:27

AI and a lot of the stuff

37:29

we're working on now, I mean, we'll have

37:31

to see how many of these we

37:33

can put together that actually can kind of

37:35

get escape velocity, but we're really

37:37

interested in turning them into businesses and making

37:40

them sustainable and kind of, you know,

37:42

that's been a lot of the focus actually

37:44

is like take big swings and that

37:46

gets back to your point. A lot of

37:48

these won't work because if you're just,

37:50

if they're all working, you're not swinging big

37:52

enough. Of course, yeah. So it's like trying

37:54

to find that balance but that's definitely, we

37:57

start with kind of could we make

37:59

this a business, and work backwards from that? And

38:01

if we end up graduating it, that's

38:03

still a good outcome for us. Another good outcome is we stop it and it

38:05

was like cut the losses. We did our 100 day sprint or whatever. Move on

38:07

to the next thing. Yeah. You mentioned at

38:09

the top of the episode that

38:11

you try to do some top-down

38:14

thinking of, you know, what are

38:16

the most interesting ponds for us

38:18

to be building in? Yeah, yeah.

38:20

What are your predictions on the

38:22

most interesting ponds to be building

38:24

in for 2025? Like, where are

38:26

you hiring talents? Like, where are

38:28

you sniffing around?

38:30

Where are you co-creating

38:33

with the DeepMind folks? I

38:35

think about them like this. We have this doc

38:38

called Labs is a collection of futures,

38:40

and it's 82 predictions about the future,

38:42

which is always dangerous to make one

38:44

prediction about the future, let alone 82. But

38:46

the thought experiment on the team where we

38:48

got to this was, imagine you're in a

38:51

room like this, the ceiling just opens up,

38:53

and this little capsule comes down, we all

38:55

jump in it, and it slings us into

38:57

the future. It's 2028. You can get

38:59

five minutes, look around, write down everything,

39:01

and you're brought back to the present.

39:03

And then write what you saw, and that's what

39:06

this doc is, is so what's the future of

39:08

knowledge look like? What's the future? Even though prompts

39:10

are old-fashioned, that's a pretty good prompt that you

39:12

gave to the team. I was just going to

39:14

tell you right now. So that's, you know, we

39:16

think about, we think about it at that level,

39:19

at kind of a high level. So say something

39:21

like, what's the future of knowledge going to look

39:23

like? We think it's going to be one piece

39:25

of that prediction.

39:27

And

39:29

anything that comes in can be transformed and

39:31

become anything on the way out. If you

39:33

believe that, then you take certain bets and

39:35

you build products kind of with that future in

39:38

mind. So that might be one of them. But

39:40

I think like going back to maybe some of

39:42

the ones that a lot of people might be

39:44

listening or building, I do think we're kind of

39:46

at the moment for video. We're at the moment

39:48

for very interesting agent stuff with the

39:51

thinking and reasoning models. And I think

39:53

there's also maybe something kind of under

39:55

the radar right now a little bit,

39:57

I still think coding has major leaps

39:59

we're going to see this year. And

40:02

so those would be some of

40:04

the ones that are top of

40:06

mind for us. Are you guys

40:08

doing work on coding at a

40:10

labs too? Yeah, we are. We are.

40:12

So right now at Google, 25%

40:14

of all the code is written by

40:16

AI. Yeah, I saw that. Jeff Dean said

40:18

that. Yeah, that's right. And that's

40:20

up a lot in the sense of

40:23

just how fast the progress is.

40:25

This is an area that I

40:27

think there's kind of two approaches you

40:29

could think about, like how again,

40:31

lower the bar,

40:31

raise the ceiling, right? Make coding available for

40:35

people who could never write code

40:37

before. Massive opportunity. You know, like I've

40:40

been coding my whole life. Well,

40:42

what's kind of interesting is some

40:44

of the most interesting stuff is happening here.

40:46

I don't know if any of

40:48

you have played with, like, Replit's agent

40:50

stuff. Really interesting, right? A couple

40:52

of weekends ago, I'm with my

40:54

fourth grade son. We are struggling right

40:57

now in our household to implement

40:59

chores. We created a chore tracking

41:01

app. 28 minutes, 45 cents. Done. We're

41:03

daily active users. And so it's

41:05

a way to kind of get into

41:07

software and a world of kind

41:10

of software abundance that's really interesting.

41:12

So we've got some stuff in that

41:14

area. We're also interested in how

41:16

do you take a professionally trained software

41:18

programmer and make them like 10x

41:20

to 100x. And there's kind of,

41:22

I think, interesting bets on both sides

41:25

of that. Yeah. What do you

41:27

think is overhyped in AI right

41:29

now? Oh, that's an interesting question. I

41:31

wish we'd move beyond the

41:33

chatbot interface a bit. That's one area

41:35

that feels like we're kind of

41:37

reusing that in a lot of

41:39

places. Google included. I'm also not sure.

41:42

There's still a lot, I think,

41:44

of like people jamming AI in this

41:46

stuff. Like AI itself is a

41:48

bit overhyped. I wish we were

41:50

a little more precise about how disruptive

41:52

or like where to apply it.

41:54

Again, we're trying to think a

41:56

lot about like workflows, not just taking

41:59

an existing product and bolting on AI.

42:01

So I think that's maybe a little,

42:03

there's a race, like you're seeing

42:05

the first generation of AI, put

42:07

it in, and it reminds me a

42:09

lot. Actually, when I first started

42:11

at Google, it was like right as

42:14

the iPhone moment was kind of

42:16

just happening and taking hold. When

42:18

Steve walked on stage in 2007, said,

42:20

this is the iPhone. If you

42:22

look at the App Store three

42:24

years later, which is roughly where we

42:26

are in this AI revolution, the

42:28

App Store in 2009-ish, I went back

42:31

and checked. Websites that have been

42:33

shrunken down to

42:35

fit on your phone. Flashlight apps and

42:37

fart apps. These were like the

42:39

highest top download of things that were

42:42

happening. So I think we're kind

42:44

of in this stage where the

42:46

real stuff is going to start to

42:48

come out kind of this year,

42:50

next year, the next year. That's

42:52

when you start to see the Ubers,

42:54

the air B&Bs, the things that

42:56

really change kind of how you do

42:59

stuff. And so that's that's kind

43:01

of my thought on it. All

43:03

right, then Sonia asked you the overhyped

43:05

question, I'll ask you the under

43:07

the radar underhyped question. What are some

43:09

areas that deserve more attention within

43:11

AI? We talked about coding a

43:13

little bit. Maybe just one other thought

43:16

on that is I think if

43:18

you can get code models that

43:20

can kind of write code and self-correct

43:22

and self-heal and migrate and do

43:24

all this stuff, it just makes you

43:26

think the pace is fast now?

43:28

That totally changes things. So I

43:30

think that's a huge, I still think

43:33

it's underhyped. It's hyped a lot,

43:35

by the way. But I think as

43:37

hyped as it is, it could

43:39

be hyped more. That's one. I

43:41

don't think we fully internalize the notion

43:43

of like what does long context

43:45

or like infinite context mean. It

43:47

gets to some of your personalization questions

43:50

potentially, but it also gets at

43:52

some of the stuff we were talking

43:54

about around how can you make

43:56

things like Mariner literally just

43:58

keep going. And so that whole notion

44:01

of long context, I mean you'll

44:03

see a lot from Google, but we're

44:05

investing a lot in that because

44:07

we think that's a strategic lever

44:09

that's important, especially as you get more.

44:11

chained-together kinds of workflows. Maybe

44:13

another one, I think there's not

44:15

enough talk about taste. Like I think

44:18

if you believe the value is

44:20

gonna be in the application layer, if

44:22

you believe there's gonna be some

44:24

percentage of AI slop, then you

44:26

can just see a few of these

44:28

trends. And I think there's gonna

44:30

be a value in good taste and

44:33

good design. And it doesn't mean

44:35

it has to be human created

44:37

necessarily, although I think there's going to

44:39

be a high value on that

44:41

too, as like human crafted content

44:43

becomes more artisan. But I think that's

44:45

another one, I would say. I

44:47

think maybe related to that is, like, veracity

44:50

and truth, and sort of what

44:52

is real. Like these are things

44:54

that I think are going to become

44:56

way more important than they already

44:58

are today. I think the context point

45:00

within there I like really firmly

45:02

agree with on like what... can

45:04

happen with infinite shared context. Think about

45:07

how far away that is from

45:09

now, where you're like typing things

45:11

in about what it is, to your

45:13

point of like, well, hold on,

45:15

there's all these different ways you can

45:18

communicate it, and then it can get

45:20

to know you better if it

45:22

has memory. And so I think there's

45:24

so much gold in there of

45:26

it just being able to keep going,

45:28

right? Giving it the right context

45:30

and whatever it needs. We think

45:32

of any company that you all back

45:35

or even Google, like what's one

45:37

of the most painful things is

45:39

when a long-term employee leaves? Because all

45:41

that context walks out the door.

45:43

So I think it's exactly right, whether

45:45

it's a personal relationship or a

45:47

work relationship. Yeah. Okay, we're gonna

45:49

wrap with a rapid-fire round. All

45:52

right, yeah, sounds good. Okay. Favorite

45:54

new AI app? Oh, I mentioned it

45:56

earlier. I'm having a lot of

45:58

fun with Replit. Love it. The

46:00

new agent thing and on the phone.

46:02

I think they're doing some really

46:04

interesting stuff there. You know, one

46:06

of our partners, Andrew Reed, is known

46:09

for like creating these amazing memes

46:11

and something around. It's now so easy

46:13

to create an app. He just

46:15

creates all the time and sends

46:17

them to me. They're really good. Yeah,

46:19

we have this concept of like

46:21

disposable software. You need to use it

46:24

once and you kind of throw

46:26

it out after you're done with

46:28

it. So yeah. Okay,

46:30

recommended piece of content or reading for

46:32

AI people. Oh, that's an interesting one.

46:34

You know, this one's not a traditional

46:37

AI pick. Because I think probably a

46:39

lot of the listeners here, I was

46:41

going to say, over the break, I

46:43

read a lot. And one of the

46:45

books I picked up was actually, it's

46:47

the Lego story. And it's the history

46:49

of Lego. And it's on its third

46:51

generation of family ownership. I'd recommend that

46:54

one. It's a really interesting, yeah. Here's

46:56

why though. There's a pivotal moment in

46:58

the company's history where they had 260

47:00

products. And maybe for a lot of

47:02

founders that are listening, you can imagine

47:04

your company could go in like all

47:06

these different ways. You're trying to figure

47:08

it out. And the grandfather, the CEO

47:11

at the time, basically identified like the

47:13

little building blocks. This is it. And

47:15

he bet the company on it. And

47:17

he bought these incredibly expensive machines. And

47:19

so I think it's like an incredible,

47:21

I like to read biographies a lot,

47:23

and this was one that really stood

47:26

out. Josh has incredible taste in

47:28

books, and he has this wonderful reading

47:30

list that he's been kind enough to

47:32

share with me. Oh, no way. That's

47:34

really wonderfully curated. It has this very

47:36

good formatting as to when it's something

47:38

you really got to read versus not.

47:40

And so, to all the

47:43

listeners, you should take Josh's suggestions seriously. I

47:45

actually really want a great AI reading

47:47

app. That's like my wish list app.

47:49

What would it do for you? In

47:51

part because I have terrible memory, but

47:53

out of everything I've ever read or

47:55

listened to, which I think is a

47:57

different set of things than all the

48:00

books on the planet. Yeah. Like, there's

48:02

all these things that are kind of

48:04

on the tip of my tongue and

48:06

ideas that connect. But, you know, they're

48:08

all kind of in an abyss, and

48:10

they're all pretty inaccessible to me. And

48:12

so something that surfaces some of those

48:15

thoughts and ideas that I've had, things

48:17

that I've read, you know, that next

48:19

layer of thought I have from reflecting

48:21

on two different things that I've read.

48:23

And the connections probably across them. Yeah.

48:25

It's a good idea. The

48:27

hard copy version, the Kindle version, and

48:29

the audiobook version being like, you

48:32

know, seamlessly intertwined. You're like, you're interested

48:34

at the most basic level, you know,

48:36

so that you can continuously pay attention

48:38

to something that you like, and then

48:40

we can get to the version that

48:42

you said, yeah. Request for startup. Okay,

48:44

pre-training hitting a wall, agree or disagree?

48:46

Oh. Maybe lean agree? I think there's

48:49

still stuff to squeeze out there, but

48:51

I think a lot of the focus

48:53

has shifted. Yeah. Nvidia, long or

48:55

short? I don't give stock advice. Index

48:57

fund. Do you ever sit with Demis

48:59

and be like, look, as someone, between

49:01

us, we won a Nobel Prize. Do

49:04

you ever start with that, you know,

49:06

because, you know, that feels like something

49:08

that's true. You know, between the two

49:10

of you, there's one Nobel Prize. It's

49:12

all one direction. It's Demis and John

49:14

Jumper. Those are the people that won

49:16

the Nobel Prize, not Josh Woodward. Okay.

49:18

Any other contrarian takes in AI?

49:21

Any other contrarian takes? I guess maybe,

49:23

I'll leave it with this. I think

49:25

we are kind of, one thing is

49:27

like, what a time to be alive

49:29

and building. Because I feel like there's

49:31

this window where there's, like, so many

49:33

adjacent possibilities opening up. I think the

49:35

second would just be like, I'd encourage

49:38

people listening to like really think about,

49:40

of course, there's the models and who's

49:42

winning in the back and forth, but

49:44

like, what are the values you're building

49:46

into your company? Because I think this

49:48

is one of those moments where there's

49:50

going to be like tools created that

49:53

shape, like, follow-on generations. I think

49:55

it's really

49:57

important people think

49:59

about that. And like, are

50:01

you trying to replace

50:03

and eliminate people, or are

50:05

you trying

50:07

to amplify human creativity?

50:10

I mean, there's, like, one that immediately

50:12

comes to mind when I'm thinking.

50:14

For example, I'm on the side

50:16

of wanting to amplify human creativity, but

50:18

I think there are these

50:21

moments, in our Valley here,

50:23

these moments that change and they change often

50:25

for generations and they can change for

50:27

good or bad. And so I would

50:29

just encourage people that are in spots

50:31

where you're building and you have this

50:33

incredible technology that's only getting smarter, faster,

50:35

and cheaper, to put it to good

50:37

use and to think about the consequences downstream. Thank

50:40

you so much, Josh, for joining

50:42

us for this conversation. Yeah,

50:44

thanks again.

