Vercel AI with Lee Robinson

Released Tuesday, 30th January 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.


0:00

Vercel provides a cloud platform to

0:02

rapidly deploy web projects, and they

0:04

developed the highly successful Next.js framework.

0:07

The company recently made headlines when

0:09

they announced v0, which

0:11

is a generative AI tool to create

0:13

React code from text prompts. The

0:16

generated code uses open source

0:18

tools like Tailwind CSS and

0:20

shadcn/ui. Lee Robinson is the

0:22

VP of Product at Vercel. He helps

0:24

lead the product teams and focuses on

0:26

developer experience on the platform. He

0:29

joins the show to talk about Vercel,

0:31

their AI SDK to easily connect front-end

0:33

code with LLMs, the v0

0:36

AI tool, and more. This

0:39

episode of Software Engineering Daily is

0:41

hosted by Sean Falconer. Check

0:43

the show notes for more information on Sean's work and

0:45

where to find him. Lee,

0:51

welcome to the

0:53

show. Hey,

0:59

thanks for having me. I'm excited to be here. Yeah, thanks

1:01

for being here. I've been to a

1:04

few Vercel meetups over the past year

1:06

or so. I've been in the

1:08

same room as you, but we've never spoken until now, which

1:11

is kind of funny because I've worked in developer experience,

1:13

developer relations world for quite some time. It's

1:15

kind of a small world in a lot of ways,

1:17

so I'm surprised we haven't ended up interacting more. But

1:20

it's great to finally have this interaction now in this

1:22

format. So I get to ask you lots of questions

1:24

and kind of dive into the details here. Yeah, and

1:26

thanks for coming to the events. That's awesome. Yeah,

1:29

so maybe a good place to start is for the

1:31

audiences. We can start with some basics. Who are you?

1:33

What do you do at Vercel? Yeah,

1:36

I'm Lee. I am now the VP of

1:38

Product at Vercel. So I help lead our

1:40

product teams here as well as think about

1:43

how we make a great experience for developers

1:45

on our platform, whether that's

1:47

going and talking to customers, talking to

1:49

developers, working on our documentation and

1:51

making sure that it's technically accurate

1:53

and compelling for developers and just trying

1:56

to build products that developers love. Awesome,

1:59

yeah. I think as a Vercel

2:01

and also a Next.js user, I feel

2:03

that. And I think this is some

2:05

of the things that we'll talk about today.

2:07

So you've been there around, I think, four

2:09

years, kind of had a number of different

2:11

roles there, starting in DevRel to like DevEx,

2:13

and I guess now VP of Product. So I

2:15

guess, what are some of the biggest changes

2:18

that you've seen since you started? It seems

2:20

like Vercel has really sort of blown up,

2:22

at least from my outside perspective, over the

2:24

last couple of years. So I'm sure things

2:26

have changed massively. Yeah, I was just reflecting on

2:29

the growth of Next.js over the past four

2:31

years, and it's been quite

2:34

a journey. Since I joined, it's grown

2:36

about 1000% in terms of monthly active

2:38

developers. So we now have almost

2:40

900,000 monthly active developers,

2:42

which is pretty exciting to

2:44

see really some incredible companies choosing

2:46

to build their web experience with

2:49

React and with Next.js. So the

2:51

growth of Next.js has been one

2:53

part that's been fun to watch,

2:55

fun to be part of, and

2:57

also just inspiring to see what

3:00

our customers are building. With Vercel,

3:02

I think when I joined Vercel

3:04

was, we were a much

3:06

smaller company, and we were really trying to figure

3:08

out how to take the

3:10

early success that we had had and help

3:12

bring it to more of the

3:15

largest websites on the web. And

3:17

along the way, I think we found the

3:19

right way to think about

3:22

the value that Vercel provides the world

3:24

and maybe how we're a little bit

3:26

different than other products or other companies

3:28

that exist. And the key terminology, the

3:31

key framing here is that

3:33

we're a frontend cloud. So if you

3:35

think about AWS, Google Cloud, Azure, these

3:37

backend cloud type solutions, you can

3:39

go there and purchase a

3:42

bunch of Legos, and you can put

3:44

them together and build your own solutions.

3:46

And those are really good

3:48

Legos. They're custom built for specific purposes

3:50

like a queue or a database or,

3:53

you know, some specific need that

3:55

you need. But we

3:57

really noticed there was this whole category

3:59

of services and problems

4:01

to be solved around just the front

4:04

end specifically. That wasn't really getting the

4:06

amount of love that we thought that

4:08

it should get. So we've really focused

4:11

in on how to create just the

4:13

best front end experience and

4:15

a product specifically for the front end

4:17

developer. And that requires

4:19

both the framework, which is why

4:22

we created and maintain Next.js, but

4:24

also the infrastructure that gets created

4:26

by using the framework as well

4:28

too. So we have basically two

4:30

products that we sell at Vercel. We

4:32

have a platform for developers, a developer

4:34

experience platform, and then we also have

4:36

managed infrastructure. So you come to Vercel's

4:38

frontend cloud and you get to take

4:40

advantage of deploying any of your favorite

4:42

frameworks and just having

4:44

it basically scale automatically with

4:47

our managed infrastructure. I think historically

4:49

we've kind of paid more

4:51

attention to the back end

4:53

and the infrastructure. I think even in the way

4:55

that we think about where

4:57

hard engineering problems exist, but

5:00

maybe over time because we

5:02

can do a lot more on the front end than we

5:04

could do even five or ten years ago. And

5:07

I think what we're essentially asking our applications

5:09

to do or front end applications to do

5:11

is now much more sophisticated than

5:13

it was a few years ago. So

5:15

do you think that is partly why

5:17

this gap existed and where

5:19

you've been able essentially to carve

5:22

ownership or sort of claim ownership over the front

5:24

end cloud because this gap did exist

5:26

and no one was really focused on sort of servicing the needs

5:28

of the front end? Yeah, I think

5:30

that's exactly it. When we look back, I know

5:33

that when I was getting started in my career,

5:35

the types of websites and web

5:38

applications that companies were building were

5:41

much smaller in comparison to what

5:43

the web looks like today. And

5:46

largely this was because the tooling at the

5:49

time was a little bit more difficult to

5:51

use, for a lot of developers, they looked

5:53

at the front end as this kind of

5:55

toy thing. The back end was where the

5:57

real work happened, and the server side of

6:00

their back-end logic, that was where the

6:02

tough engineering problems were. And

6:04

what the early team at Vercel and what

6:06

Guillermo and what a lot of the first

6:09

engineers here saw was that there's

6:12

some really hard problems to be solved

6:14

in the front end, both in how

6:16

you write your JavaScript code or how

6:18

you build your user interfaces, but also

6:20

how you make the center

6:22

of your digital storefront or the center

6:24

of your digital property, your customer.com, how

6:26

you make that available for customers everywhere

6:28

in the world and you make it

6:31

fast for customers everywhere in the world.

6:33

And that last mile of, okay, we've

6:35

got an amazing back-end, super fast. Our

6:37

API is incredibly fast. They scale up,

6:39

you know, perfect database architecture, amazing. But

6:42

now, how do we actually make a

6:44

product that customers love and want to

6:46

use? And that comes down to the

6:48

front end, the UI, the user experience,

6:50

the design, and that's really where

6:52

we try to meet these folks at. Because what

6:55

we found when we talked to a lot of

6:57

developers, when we looked at the market was, the

6:59

front end developers didn't really feel like they had

7:01

a company that was sticking up for them, that

7:03

was building products for them, that was helping solve

7:05

their needs and essentially

7:08

allowing them to focus on the product

7:10

experience, focus on the user experience, and

7:12

let someone else handle all of those

7:14

bits. And especially as we move forward

7:16

into this kind of AI

7:18

first, AI forward world, these tools

7:21

enable you to spend so much

7:23

more time just building product code

7:25

and not having to think about

7:27

the connective bits between all

7:29

these pieces. So you're absolutely right

7:31

that the front end has, you know, maybe

7:34

10 years ago was looked at as maybe

7:36

less than the back end. But I think

7:38

today, a lot of developers are realizing not

7:40

only the opportunities for optimizing

7:42

your front end, whether that's improving the

7:45

conversions on your ecommerce site, or decreasing

7:47

site load times, which can have, you

7:49

know, major impacts on your business and

7:51

on SEO, a lot of more

7:54

developers are realizing that and to achieve that they

7:56

have to become proficient in the front end. And

7:58

really, that's where we come in. Yeah,

8:00

I would think that also, you know,

8:02

consumer expectation has shifted as well. Like

8:05

as a consumer, what I'm expecting

8:07

from my application is,

8:09

you know, different than it was, you know,

8:11

a few years ago, like I'm expecting lightning

8:13

fast. Like I get frustrated if, you know,

8:16

it isn't immediately responsive. And actually what's happening, you

8:18

know, it's pretty amazing. You know, if I

8:20

think back to like when I was in

8:23

high school on a, you know, 56

8:25

K baud modem, like, you know, praying for an image

8:27

to load or something like that to where we are

8:30

now, like our expectations have shifted.

8:32

And that's also required a significant

8:34

amount of technology innovation. And

8:37

also the scale of what we're doing on

8:39

the front end has changed, not only with

8:41

the amount of people that we're serving also

8:44

globally, but also in terms of just

8:46

the amount of code that you're sort

8:48

of writing for the front end. So

8:51

that makes it harder for teams to

8:53

essentially build and scale what they're doing,

8:55

like operationally from like an engineering team

8:57

perspective versus something that's like a

9:00

much smaller application, essentially, with, you

9:02

know, more constraints. And I think that's where it

9:05

essentially that the market has to

9:07

respond with innovation and also respond with,

9:09

I think, you know, really good

9:11

engineers starting to understand that, Hey,

9:13

there's like huge opportunities. There's hard problems to solve

9:15

in this space. And I should take a look at

9:17

this rather than just sort of focus on the back

9:20

end or the downstream services. Yeah,

9:22

absolutely. So you talked a little

9:24

bit there in your introduction about sort of your

9:26

focus, and Vercel's, on sort of

9:28

the developer experience and making sure that's really good

9:31

and it's clear that, you know, I think

9:33

that there's strategic weight put into the value

9:36

of developer experience from Vercel and in terms

9:38

of Next.js. But

9:41

why was that an important investment for

9:43

Vercel? Like, you know, there's a lot

9:45

of go to market product strategies, but

9:47

why was DevEx, you know, at least from

9:50

my perception, like a P zero for Vercel

9:52

versus something that I think a lot of companies

9:54

may be discount or maybe they look at down

9:56

the road. Yeah. Well, first and foremost,

9:58

our team are tool

10:01

builders. They're people who

10:03

care about the craft, because

10:05

they build and improve

10:07

and sharpen their own tools, and that

10:09

helps them create better products. So we

10:11

have this cycle of

10:14

caring about the craft of building

10:16

a great front end, building a great product,

10:19

having good tools that enable us to

10:22

actually do that, and making tools that

10:24

satisfy that demand for ourselves. So we

10:26

care about the developer experience on a

10:28

really fundamental level, because not only is

10:31

it helping scratch the itch of us

10:33

building great tools to help us build

10:35

better products, but interestingly enough, that's the

10:38

same thing that our customers want as

10:40

well too, especially when we can validate

10:42

that these tools actually enabled

10:45

us to build really incredible things

10:47

as well too. So developer experience

10:49

from the start has always

10:51

been a foundation of how

10:53

Vercel thinks about the world, how Vercel

10:55

thinks about building great products, and

10:58

we've had the opportunity to

11:00

inject that into basically

11:03

everything that we do, both at the

11:05

framework and Next.js level, trying to make

11:07

sure that this is a smooth experience

11:09

for customers, but then also the little

11:11

bits of polish and user experience and

11:14

developer experience wins that you can have

11:16

when trying to make a platform that

11:18

kind of takes away all

11:20

of the hard parts about getting your

11:22

website or web application online. So I'll

11:24

give you one example of how

11:27

we think about implementing some great

11:29

developer experience. We have a

11:33

philosophy that we call framework defined

11:35

infrastructure, and if you think about

11:37

how the majority of front-end teams

11:39

have been building their applications kind

11:42

of up until now, let's say,

11:45

they spend a lot of time taking

11:48

the framework code that they

11:50

write and then also writing

11:52

this intermediate layer of infrastructure

11:54

as code, which is even an

11:56

improvement over the past when there was no infrastructure

11:58

as code and you were manually building out

12:00

all of your infrastructure in some settings pages

12:02

inside of AWS, right? Then there was the

12:05

terraforms and the AWS CDKs and like there

12:07

were more tools that allowed you to at

12:09

least check this config into

12:11

version control and have a review process

12:13

on it, right? That's helpful. That's definitely

12:15

an improvement over the previous status quo,

12:18

but revisiting that experience kind

12:20

of from the first principle of

12:22

what's the best experience for developers.

12:25

Well, when I'm trying to build a

12:27

great product and I've got this powerful

12:29

framework, I actually don't want to have

12:31

to think about the wiring of the

12:33

bits of, okay, I'm rendering my homepage,

12:35

I'm creating a homepage. How does that

12:37

actually turn into a file that gets

12:40

cached on an edge network somewhere? How

12:42

does that actually turn into compute that

12:44

is able to serve up a dynamic

12:46

personalized version of my page? So

12:49

this idea of framework-defined infrastructure is

12:51

you write code in whatever front-end

12:53

framework you want, whether it's Next.js

12:55

or any of the other 30

12:57

plus frameworks that

12:59

we support, and you deploy to Vercel, and the

13:02

developer experience part here is we took the

13:04

hard part of figuring out how to map

13:07

those framework outputs into

13:09

global infrastructure, into managed infrastructure, and we

13:11

just built that part for you. So

13:14

we maintain the layer of adapters, essentially,

13:16

that converts that code for you. So

13:18

you just write your code, you push

13:20

it up, and we'll take care of

13:22

that part. How far do you

13:25

think you can go with this sort of approach

13:27

of a framework infrastructure?

13:29

Does a company

13:31

reach a point where this no longer scales,

13:33

and they kind of have to go in

13:35

and actually have their own infrastructure team to

13:37

go and sort of turn the right knobs

13:40

and so forth? Or

13:42

can this essentially, can I

13:44

go from very beginning ideation

13:46

stage to global scale with

13:48

massive reach? So

13:51

developer experience can't come

13:53

at the cost of observability.

13:55

So while framework defined

13:57

infrastructure is great, and it's awesome that you

13:59

can build small

14:01

to large applications with this, that doesn't

14:03

mean that you should sacrifice being able

14:06

to understand what the generated outputs of

14:08

the managed infrastructure are, because at the

14:10

end of the day, you're still responsible

14:13

for ensuring that your product

14:15

is online and that your customers are seeing

14:17

a great experience. Of course, we're gonna manage

14:20

that infrastructure for you, but you wanna ensure

14:22

that we have that relationship where you understand

14:24

the observability of your system, whether it's user

14:26

code that you introduced that could have been

14:29

a regression, for example, we give you tools

14:31

kind of to deal with that. As

14:44

a business, you care about revenue,

14:46

but as an engineering team, the

14:48

last thing you want to do

14:50

is build infrastructure for billing. It's

14:52

a headache, time zones, proration, tracking

14:54

usage, and let's not even

14:56

talk about integrating Salesforce or NetSuite, that's

14:59

where Orb comes in. Orb is

15:01

a flexible usage-based billing engine for

15:03

modern pricing. The fastest growing companies

15:05

like Vercel and Replit trust Orb

15:08

to power billing, invoicing, and financial

15:10

reporting. Let Orb help you get

15:12

back to building, not billing. Listeners

15:15

of Software Engineering Daily get

15:17

a free copy of the

15:19

leading book on pricing and

15:21

packaging, Monetizing Innovation, when they

15:23

sign up and book a

15:25

meeting at softwareengineeringdaily.com/orb. So

15:39

I guess going back to your question, your framing,

15:41

is there a point at which this

15:43

framework-defined infrastructure doesn't

15:45

work? And what we've seen is

15:48

that for hobby projects,

15:50

for small to medium-sized businesses, and

15:53

even to some larger enterprises, this approach

15:56

can work really well because you're

15:58

effectively building what I

16:00

like to call a majestic monolith. You

16:03

write your code maybe in a

16:05

monorepo, maybe not in a monorepo,

16:07

where you have this co-location of

16:09

what traditionally was kind of a

16:11

separation of your front end and

16:14

your back-end-connecting code.

16:16

Because as front end has

16:18

evolved, a lot more of that, what

16:21

was traditionally like this layer of code in

16:23

between to connect your front end and the

16:25

back end has actually moved closer towards the

16:27

front end. And the back end is more

16:29

of the queues and the databases and the

16:31

separate parts like that. So in this world,

16:34

in your majestic monolith, you're

16:36

deploying to Vercel, you write code like it's

16:38

a monolith. But then the framework

16:41

defined infrastructure takes the outputs and

16:43

gets to optimize each of the

16:45

parts individually. So it can determine,

16:48

okay, this page, we're not doing any

16:50

user personalization or customization. Let's put that

16:52

behind an edge network. Let's add the

16:54

correct caching headers. Let's just make this

16:56

thing really fast and put it close

16:58

to users all over the world. This

17:01

page or this component on a page

17:03

is doing something dynamic. It's looking at

17:05

the incoming headers. It's looking at the

17:07

cookies. We're trying to do some

17:09

kind of personalized banner for users returning to

17:11

an e-commerce experience. For that, we're gonna need

17:13

to run some compute. So we're gonna have

17:15

some compute that we can put close to

17:17

our database and make it really fast

17:19

to essentially have this personalized experience on

17:21

our site as well too.
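To make that split concrete, here is a rough sketch in Next.js App Router terms (an illustration with made-up routes, not code from the episode): a page with no per-user data can be prerendered and cached behind the edge network, while a page that reads cookies opts into per-request compute.

```tsx
// app/products/page.tsx (hypothetical route): no personalization, so Next.js can
// prerender this page and Vercel can cache it at the edge, revalidating the
// static copy in the background every hour.
export const revalidate = 3600;

export default function ProductsPage() {
  return <h1>Products</h1>;
}
```

```tsx
// app/offers/page.tsx (hypothetical route): reading cookies makes the route
// dynamic, so it runs as compute on every request instead of being served
// from the static cache.
import { cookies } from 'next/headers';

export default function OffersPage() {
  const returning = cookies().get('returning')?.value === '1';
  return returning ? <p>Welcome back, here is a personalized offer.</p> : <p>Hello!</p>;
}
```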

17:23

So we've talked to a lot of

17:25

customers that this approach works really

17:28

well, but for the largest, largest

17:30

customers that we work with, they

17:32

still have completely separate teams of

17:34

folks who are building their traditional

17:36

back-end cloud, back-end infrastructure, usually

17:38

because they also have a mobile application

17:41

or some other properties that are also

17:43

using kind of a same shared API,

17:45

right? Maybe it's a GraphQL API,

17:47

maybe it's a REST API. So for

17:49

their front end, as it's a

17:51

decoupled composable architecture, for that bit, they're

17:54

still connecting and talking to this API.

17:56

They're not just going direct to the

17:58

database, like maybe a smaller

18:01

project or a smaller business might do. For

18:03

them, they still have this security layer

18:05

and this API layer in

18:07

between their back end and their database. And

18:10

then how is that sort of

18:13

mapping or translation happening? Is this

18:15

like a more heuristic-

18:17

based translation that's happening? Or is

18:19

there some sort of machine learning component

18:21

to figure out how do we actually optimize based

18:23

on what someone's trying to do on the front

18:26

end? Yeah. So how

18:28

the translation happens for framework

18:30

defined infrastructure is that we

18:32

publish a specification that is essentially

18:35

like an API for frameworks.

18:37

The specification defines what

18:40

the output format is. And

18:43

if code is deployed in this output

18:45

format, then Vercel knows how to

18:47

convert that into infrastructure, basically. And

18:49

the cool part about this specification

18:51

being public and published is that

18:54

while we have adapters for the

18:56

frameworks to convert into this output,

18:58

you can technically build your own

19:01

framework or take your custom internal

19:03

company roll-your-own framework and

19:06

convert it into this output through a

19:08

script. And then it works the same

19:10

way as the rest of all of these

19:12

other frameworks. It can take advantage of the

19:15

same cloud infrastructure primitives, the same infrastructure,

19:17

the same workflow, and Vercel's developer experience

19:19

platform. It's all integrated the same way. So

19:21

that's how we think about the translation.
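For a sense of what that published output format looks like, here is a rough sketch of the directory Vercel's Build Output API describes. This is paraphrased from memory of the public spec, so treat the exact file and field names as approximate and check the spec itself.

```
.vercel/output/
├── config.json             # e.g. { "version": 3, "routes": [...] }: routing, headers, caching
├── static/                 # prerendered pages and assets served from the edge network
└── functions/
    └── index.func/
        ├── .vc-config.json # e.g. { "runtime": "nodejs18.x", "handler": "index.js" }
        └── index.js        # the compute entry point for dynamic rendering
```

Any framework adapter, or a custom build script, that emits this layout gets the same managed infrastructure treatment.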

19:25

Okay, so someone can essentially override

19:27

some of the default experience and

19:29

create something that's maybe a little

19:31

bit more customized for whatever their

19:33

application needs are. Yeah.

19:35

And, like, at the framework level,

19:37

we also give developers,

19:40

specifically in Next.js, the

19:42

tools to control

19:45

how their infrastructure behaves. So for

19:47

example, you want to control the

19:50

maximum duration or the memory of

19:52

the deployed compute for your Vercel

19:54

Function.
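As a concrete example of that framework-level control (a minimal sketch; the route path and helper are hypothetical, and exact limits depend on your plan and Next.js version): Next.js lets you export route segment config such as `maxDuration` straight from the route file, and Vercel applies it to the deployed function. Memory, by contrast, is typically configured at the project level, for example in `vercel.json`.

```ts
// app/api/report/route.ts (hypothetical route): segment config that Vercel reads
// when it creates the function for this route.
export const maxDuration = 60; // allow up to 60 seconds for this function

export async function GET() {
  const report = await buildSlowReport(); // stand-in for genuinely slow work
  return Response.json(report);
}

// Hypothetical helper so the sketch is self-contained.
async function buildSlowReport() {
  return { generatedAt: new Date().toISOString(), rows: [] };
}
```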

19:56

So I want to get your thoughts on how AI

19:58

sort of fits into this both on

20:00

the DevEx side and some of the investments

20:02

that Vercel is making. With

20:04

all the development happening in the gen AI

20:07

space right now, improving developer

20:09

productivity, like GitHub Copilot was of course a

20:11

big one from a little over a year

20:14

ago. What are your thoughts

20:16

in how generative AI is going to

20:18

change what it means to have a

20:20

great developer experience? So

20:23

we started seeing a large

20:25

increase in the amount of developers deploying

20:28

AI applications to Vercel. Really

20:30

I would say at the beginning of

20:33

this year, at the beginning of 2023, and

20:35

when we started to look, a

20:38

lot of this corresponded with the

20:40

growth of OpenAI and ChatGPT I

20:42

think, but also the amazing number

20:44

of open source models that are

20:46

being created and generated and developers

20:48

looking for a way to run,

20:50

compute that then talks to these

20:52

models. And when

20:54

we looked and we talked to these developers who are

20:57

wanting to build with AI, we learned

20:59

a couple of things. One is that

21:02

for a lot of existing workloads,

21:04

existing applications, there are ways

21:07

today for developers to sprinkle

21:09

in a little bit of

21:12

AI enhanced tooling to

21:14

drastically increase their product

21:16

experience in ways that

21:19

previously would have been extremely difficult to

21:21

do. And this is really bringing

21:24

this whole new revolution to the

21:26

front end, especially when you think

21:28

about natural language interfaces.

21:30

I think the most common one that

21:32

we've seen from talking to customers is

21:34

similar to how ChatGPT got so popular with

21:37

a chat interface, is

21:39

taking that same idea of

21:41

talking in natural language to

21:43

every other part of the

21:45

front end, every other part of your business.

21:48

So maybe that's how can

21:50

I write natural language and query

21:52

my database? Maybe that's how can

21:54

I write natural language and look

21:56

at all the documentation and support

21:58

articles on my site to get really

22:00

good responses for how to debug an

22:02

issue or how to get help rather

22:05

than having to wait to talk to a human,

22:07

for example. And for a

22:09

lot of these customers, they're coming to

22:11

Vercel because we're able to help accelerate

22:13

how fast they can take advantage of

22:16

leading edge technology, the latest technology.

22:18

It's been interesting because as

22:21

technology waves have

22:23

happened in the past, you know, there

22:25

was a lot of innovation and a

22:27

lot of excitement around crypto and we

22:29

saw a lot of those customers also

22:31

explore, you know, what does this mean

22:33

for us on Vercel? And we're still

22:35

seeing some folks doing that as well

22:37

too. It's a much

22:39

larger wave, I think with AI

22:42

as well, where customers are seeing

22:44

more clear opportunities of how using

22:46

large language models, AI tools can help

22:49

make their products better. So

22:51

that's like, that's step one. That's what we've seen a

22:53

lot of people do today. Now

22:55

what we're starting to see is a

22:58

lot of newer customers, startups,

23:00

startups that operate inside larger

23:02

companies, take a look at their product

23:04

and reimagine what would

23:07

this product look like designed in an

23:09

AI first world? How would we change

23:11

the front end? How would we innovate

23:14

on the front end to provide a

23:16

product experience that previously was impossible to

23:18

do? And this spans across basically every

23:21

single industry, even Vercel's own product. We're

23:23

thinking about how are there ways in

23:25

a world where you have this powerful

23:27

large language model at your side, whether

23:30

it's an open source model or some

23:32

larger foundational model, how do you take

23:34

these and enrich the entire experience?

23:37

And when you start to go down

23:39

that path, the design and the user

23:41

experience of the front end looks drastically

23:43

different than maybe the last generation of

23:46

websites. So these customers are

23:48

coming to Vercel, they're using Vercel's front end

23:50

cloud, and we're helping

23:52

them accelerate towards getting

23:54

ahead of their competitors, building out

23:56

this AI first advanced version of

23:59

their site, this AI native

24:01

experience? Yeah, I mean, I

24:03

think, you know, what you're saying around

24:05

like chat and sort of changing interfaces

24:07

to be more sort of, you know,

24:10

natural language to humans, completely makes sense

24:12

as sort of the baseline starting point,

24:14

because we've kind of gotten

24:16

good at training ourselves in how to

24:19

like, unnaturally interact with technology. Like, you

24:21

know, if you want to find a restaurant

24:24

in New York City, you go

24:26

to Google search and you type in, you know, restaurants in

24:28

New York City, but I would never come up to you

24:30

and be like, restaurants in New York City, and you'd be

24:32

like, Oh, okay, like, you know, here's my favorite, you know,

24:34

top 10 list, but I can now actually go to chat

24:37

GPT or, you know, some other type of

24:40

LLM based system, and actually ask it a

24:42

question, like I would ask a friend and

24:44

actually generate a response as if, you know,

24:46

it's my friends to some degree. And

24:48

then I can ask follow ups and have

24:50

like an actual like human conversation. I think

24:52

that's transformative. That's kind of like the baseline.

24:54

And I think where, you know, another opportunity

24:57

in this space is, and

24:59

I'm curious to hear if you're seeing something

25:01

similar is around like actual adaptive UI. So

25:04

we I think especially complicated products,

25:07

you know, if you think about interacting with AWS

25:09

is console and

25:11

the like 1000 products that they have in

25:13

there, you know, you're kind of faced sometimes

25:15

with this choice of am I in beginner

25:17

mode or I'm a expert mode. But if

25:19

we are using actual generative AI, we can

25:22

actually create adaptive UI to kind of learn

25:24

what is the right set of options to

25:26

provide something. I think Microsoft, they tried like

25:28

a very rudimentary version of this in

25:30

office years ago with an adaptive ribbon

25:32

didn't really work out. But I could

25:34

see essentially, sort of all UIs

25:37

becoming a lot more personalized based on how

25:39

people actually interact with the tool and what

25:41

their needs are. Yeah, our vision

25:43

for the web is a web that's very

25:46

dynamic and personalized and fast everywhere. And this

25:48

has been the journey for the last, you

25:50

know, eight years of Vercel and will be

25:52

the journey for the next eight years as

25:54

well too. Just how do we continue making

25:56

this better and better and better, faster,

25:58

more personalized, more dynamic. But

26:01

to step back a little bit, I love what

26:03

you're talking about around adaptive interfaces. I want to

26:05

talk a little bit about how we've been thinking about

26:07

this and what we've been building towards here. So I

26:09

talked about at the beginning of 2023, we started to

26:12

see a

26:14

lot of developers wanting to build AI

26:16

applications on Vercel. So at the same

26:19

time, what we decided to do going

26:21

back to our principles of developer experience

26:23

was, how can we build a

26:26

set of tools or a framework that

26:28

help developers use

26:30

AI models, use large language models,

26:33

use AI tools on Vercel easier?

26:35

And we built what we've called

26:37

the AI SDK. And

26:39

it essentially simplifies how

26:42

you can add in a chat-based interface

26:44

or how you can connect to OpenAI

26:47

or Anthropic or Hugging Face or

26:49

all of these popular services for

26:51

using an AI model into just

26:53

a few lines of code in

26:55

your JavaScript or TypeScript or React

26:57

or Next.js application. And what

27:00

we learned from this first experience was

27:02

one, that developers really appreciated having an

27:05

easier way to integrate this technology and

27:07

just write a few lines of code

27:09

and connect to their large language model.
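For a sense of what "a few lines of code" means here, this is a minimal sketch in the spirit of the AI SDK's chat quickstart from around the time of the episode. The SDK has evolved since, so treat the specific imports (`OpenAIStream`, `StreamingTextResponse`, `useChat`) as a snapshot of that era rather than the current API, and the model name as an assumption.

```ts
// app/api/chat/route.ts: stream an OpenAI chat completion back to the browser.
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();
  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo', // assumption: any chat-capable model would do
    stream: true,
    messages,
  });
  return new StreamingTextResponse(OpenAIStream(completion));
}
```

```tsx
// app/page.tsx: the useChat hook wires a form to the route above and streams tokens in.
'use client';
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <p key={m.id}>{m.role}: {m.content}</p>
      ))}
      <input value={input} onChange={handleInputChange} placeholder="Say something" />
    </form>
  );
}
```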

27:11

But we also learned that the transport

27:14

format of that was primarily

27:17

just text-based and it left an opportunity

27:19

for improvement. So you talk to your

27:21

large language model, you send it some

27:23

text, it returns back some text. Well,

27:26

the great thing about a Google search experience

27:28

when I search for what's the

27:30

weather in this location or what restaurants are

27:32

in New York City is they can give

27:34

you these rich interactive widgets. Like when you

27:37

search for plane tickets, and you can actually

27:39

do the whole process in the widget and

27:41

then click one button and it takes you

27:43

out to United or American or something, for

27:45

example. And that experience

27:48

hasn't really been democratized yet for

27:50

large language models. The

27:52

amazing incredible thing about Google search

27:55

is they've built such a dynamic

27:57

personalized application with all of these

27:59

widgets, and it's still

28:01

so fast. There's just no way

28:03

they could have possibly known about every

28:05

single possible widget combination that you needed

28:07

to build. It has to be very

28:09

personalized based on the user's location, what

28:12

they've searched, et cetera. So we're trying

28:14

to take that idea and

28:16

democratize it to everyone. And the

28:18

first step in doing that was

28:20

actually taking our AI SDK and upgrading

28:22

it so that it could output

28:25

actual interactive components or widgets as

28:27

well too. So you ask a

28:29

large language model like OpenAI's GPT, you

28:31

ask it, how can I build

28:34

out a website that looks like Stack

28:36

Overflow or it looks like Sketchers or

28:39

something. And it would be great if

28:41

what got spit out were actual components,

28:43

whether it was you're actually trying to

28:45

build this UI or you're trying to

28:48

build kind of an interactive widget for maybe answering

28:50

support tickets and you want to render what the

28:52

flights are for the support tickets. And

28:55

in doing this, we realized, there's

28:58

a really interesting opportunity here

29:00

to simplify how developers are

29:02

building UI as well as

29:04

giving them this underlying primitive of generating

29:07

components. So this kind of put a fork in

29:09

the road. On one hand, we started building

29:11

a product that we've now released, which is called

29:13

V0. And this product

29:15

actually uses all of this tech that I

29:17

just talked about. So you go to V0

29:20

and you type in, it is an AI

29:22

first user experience. There's just a search

29:24

bar. There's just a box. You type

29:26

it in the box. I'd like to

29:28

create a dashboard for my internal application that

29:31

has a table on the right, shows the

29:33

customers. There's some buttons for how I can

29:35

edit or modify their invoices. On the left

29:37

have a list of items, have a search

29:39

bar, whatever product requirements you have.

29:41

You hit enter and it's gonna use

29:44

this AI SDK. It's gonna talk through our

29:46

model and it's going to

29:48

stream back. It's gonna progressively show

29:51

the UI components that it generates. And then

29:53

when it's done, you can just click a

29:55

button and copy paste and use that in

29:57

your application to quite literally create the V0,

30:00

to create the first version. So that's like

30:02

one fork in the road. We can talk a

30:04

little bit more about that. But then second is

30:06

now that we built this functionality into the AI SDK

30:08

as well as some other things like open

30:11

AI's function calling and some of the more advanced

30:13

things like the Assistants API. It's

30:15

really interesting to see how developers are taking

30:18

that and then using it to build these

30:20

types of experiences that previously were super hard

30:22

to do. Like when you click on the

30:24

chat widget in the bottom right on

30:27

Expedia and you wanna talk to customer

30:29

support because you wanna change the duration

30:31

of your trip. You wanna modify

30:33

one part of your hotel and

30:35

it can actually spit back out

30:37

this little widget that shows your

30:39

reservation. It shows the dates and

30:41

you can actually interact with a

30:43

real UI. That's more like this

30:46

adaptive UI future that you were

30:48

talking a little bit about versus

30:50

always doing everything text-based.
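To sketch the pattern behind that Expedia-style example (an illustration of the general idea, not the AI SDK's or any travel site's actual code; every name below is hypothetical): the model responds with a structured tool call instead of prose, and the front end maps that call onto a real, interactive React component.

```tsx
// Hypothetical shapes for what a model might return via function/tool calling.
type ToolCall =
  | { name: 'show_reservation'; args: { hotel: string; checkIn: string; checkOut: string } }
  | { name: 'answer_text'; args: { text: string } };

// Map a structured tool call to UI instead of rendering raw text.
function renderToolCall(call: ToolCall) {
  switch (call.name) {
    case 'show_reservation':
      return <ReservationCard {...call.args} />;
    case 'answer_text':
      return <p>{call.args.text}</p>;
  }
}

// A real component the user can interact with, e.g. to change their dates.
function ReservationCard(props: { hotel: string; checkIn: string; checkOut: string }) {
  return (
    <div>
      <strong>{props.hotel}</strong>
      <p>{props.checkIn} to {props.checkOut}</p>
      <button>Change dates</button>
    </div>
  );
}
```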

30:52

Right, yeah, I mean sometimes the best

30:54

answer to a question is a picture. And

30:57

so that's something that we can

30:59

do in the digital world. We

31:01

can go beyond just sort of

31:03

a verbal or textual response. And

31:06

I mean there's a reason why people say pictures

31:08

versus sounds or a video or something like that.

31:10

I think one of the worst experiences

31:12

you can have is when you ask somebody for directions

31:15

and the way they describe complicated directions

31:17

is like oh, they're mentioning some reference

31:19

that you've never heard of, you turn

31:21

right there. Humans are kind

31:23

of bad at giving directions but like a map

31:26

with a drawn out set of directions is super

31:28

useful. So if you can essentially go from

31:30

the sort of crappy human version to

31:33

actually bridging the gap or using

31:35

sort of the way humans naturally interact when

31:37

that interaction makes sense and is a better experience,

31:40

then you can leverage that. But if a

31:42

picture, a widget or whatever is a better

31:44

experience then you can sort of optimize for

31:46

that and create this adaptive personalized experience. I've

31:49

been trying to tell my dad this when he

31:51

explained to me directions for his whole life. He's

31:54

like oh yeah, I'm from a small town. He's

31:56

like oh yeah, you just go down that gravel

31:58

road, then you go north, and then after the

32:00

third house on the right, it's got the red

32:02

barn, then you take a left, and then that's

32:04

old Bob's place. It's like, come on. Exactly. It's

32:07

like too much detail. Yeah. Give

32:09

me the facts. Yeah, exactly. RudderStack is the

32:13

warehouse native

32:16

customer data

32:18

platform. With

32:27

RudderStack, you can collect data from every

32:29

source, unify it in your data warehouse

32:31

or data lake to create a customer

32:33

360 and deliver it to every

32:36

team and every tool for activation. RudderStack

32:38

provides tools to help you guarantee

32:40

data quality at the source, ensure

32:42

compliance across the data lifecycle, and

32:45

create model-ready data for AI and

32:47

ML teams. With RudderStack,

32:49

you can spend less time

32:51

on low-value work and more

32:54

time driving better business outcomes.

32:56

Visit rudderstack.com/SED to learn more.

33:09

So, you know, you mentioned v0. So

33:12

like there's this growing AI stack.

33:14

We have vector databases, foundation

33:16

models, orchestration tools like LangChain,

33:18

your AI SDK. Where

33:21

do your investments kind of

33:23

sit in this now emerging

33:26

LLM tech stack? Yeah.

33:29

So we've always cared a lot about the

33:31

actual framework that's enabling the developers, which is

33:33

why the first bits that you saw us

33:35

work on were the AI SDK, similar to

33:37

how in the front-end framework

33:40

space we work on Next.js. So

33:43

the AI SDK is like our

33:45

investment into that tooling for developers.

33:47

On the product side with v0,

33:49

it's trying to build a product

33:52

that makes it very easy to get that first

33:54

version of your UI. But

33:56

in terms of the entire landscape and

33:58

how we want to. work with the

34:00

growing number of providers that pop up every

34:02

single day. There's new open source models, it

34:05

seems like, basically every week

34:07

there's a new breakthrough, which is super exciting.

34:10

We've tried to build the SDK in

34:12

a way that it can integrate with basically any

34:14

of these providers. And then also, as we're continuing

34:16

to make the product better for V0, as

34:19

there's new technology, new breakthroughs that are released, we're

34:21

able to integrate that back into the product. So

34:23

for example, I don't know if it'll be out

34:26

by the time this is posted, maybe it'll be

34:28

shortly after, but you'll have the ability to upload

34:30

images to V0. And then rather than typing things,

34:32

you can just use an image,

34:34

which is even easier sometimes. And then

34:36

I guess the last bit here is

34:38

just, Vercel as a whole, as a

34:40

front-end cloud for all of our customers, how

34:43

are we helping them accelerate

34:45

their AI adoption journey? And

34:48

the great thing about having this kind of headless

34:50

composable architecture is that because we're

34:52

just focusing on the front end

34:54

first, we can integrate with the experience

34:57

first. We can integrate with whichever tool

34:59

these customers want. Sometimes it's their own

35:01

kind of private version of an open

35:03

source model that they've trained on their

35:05

own data that they've put behind their

35:07

private infrastructure, behind an API, but they

35:10

still want to integrate it into their

35:12

product in UI and front-end experience. And

35:14

we can still help them accomplish that.

35:17

What was the reception when you

35:19

first teased V0? I

35:21

think it was originally launched as sort of like

35:23

private beta. The

35:26

initial reception was really

35:28

interesting because a lot of developers, I think,

35:31

struggle from a problem that I've also struggled

35:33

with, which is it's almost like

35:35

writer's block for building out your user

35:38

interface. You kind of start to learn

35:40

what good looks like after doing front-end

35:42

development for a while, but going from

35:44

zero to one is actually harder than

35:46

it seems sometimes, especially

35:49

with CSS. CSS

35:52

is a very deep,

35:54

deep subject. There's a

35:56

lot to learn to be very proficient

35:58

and masterful with CSS. And

36:01

for some of our best engineers at

36:03

Vercel, some of the people that I've

36:05

worked with who are just incredible engineers,

36:07

both on front end as well as

36:09

on back end, some of them still

36:11

aren't the best CSS experts in making

36:13

something look really beautiful, really polished.

36:16

And the cool thing that we think V0

36:18

can help accelerate for them is getting

36:20

a version of their UI that's actually

36:22

a good place for them to

36:24

start aesthetically. It's actually pleasing to look at,

36:26

it's got the right break points for mobile,

36:29

which sometimes can take quite

36:31

a bit of time to do for

36:34

how they actually visualize what the configuration

36:36

for putting all these pieces together would

36:38

be. So the initial reaction I think

36:40

was really focused on that. And

36:43

also how they could then

36:45

use this technology to speed up their

36:47

workflows that they do every day. For

36:50

us internally at Vercel, it's been

36:52

really useful to use V0 to

36:54

especially build parts of our tooling

36:57

internally that isn't as much of

36:59

our main customer facing products. It's

37:01

actually a lot of the internal

37:03

tooling because you're

37:05

trying to get a very specific need

37:08

done. And you want a

37:10

UI that looks good, that's easy to

37:12

use for all of the internal employees

37:14

building something. And typically with

37:16

internal tools with, let's throw it in the

37:19

bucket of tech debt. When

37:21

you're trying to make updates to tech

37:23

debt type things, to continuous improvement type

37:25

things, to some little configuration to change

37:27

a support value, you're probably

37:30

not gonna spend that extra mile on

37:32

really getting a great UI. At least

37:34

for me, my career, most of the

37:36

internal dashboards or tools that you see aren't

37:38

the best design things. Yeah, you'll be like,

37:40

oh yeah, this was definitely designed by an

37:43

engineer. Yeah, yeah, yeah. Or minimum scaffolding to

37:45

kind of hold this thing up. It's a

37:47

plain input with no CSS, right? It's like,

37:49

it's just the most simple thing ever, which

37:52

sometimes that's fine, right? But the cool thing

37:54

about V0 is

37:56

that you're able to chat with this model

38:00

to get a better looking version of

38:02

these tools, which can have some serious

38:04

impacts on the productivity of your engineering

38:06

and your support teams when they're able

38:09

to more quickly build internal tools and

38:11

refactor internal tools and still own all

38:13

of that code and that experience internally

38:16

if you need to connect to private

38:18

APIs or you need to add in

38:20

some custom logic. It's been a powerful

38:23

combination for us. Do you think tools

38:25

like v0, GitHub Copilot, these other

38:27

tools that are making it easier and

38:29

easier for people to put in a prompt

38:32

and spit out code that you can actually

38:34

copy and paste and use? Is

38:36

that changing what it means to be an

38:38

engineer and developer in any way? Or maybe

38:40

it isn't right now, but if we fast

38:42

forward a year or two, or if

38:45

we continue at this exponential growth curve in

38:47

AI, is it going to change what it

38:49

means to be an engineer? Yeah,

38:51

I think it's already happening. I know for me,

38:54

AI tooling has accelerated the

38:56

rise of copy paste and

38:59

copy paste as a good thing, not as a

39:01

bad thing. I think maybe

39:03

traditional engineering software design

39:05

wisdom, a lot of folks have

39:07

really stressed the importance of DRY,

39:10

don't repeat yourself, try to create

39:12

these perfect abstractions. And

39:14

in an AI first world,

39:18

abstractions are still good, but there's

39:20

also a lot of value in

39:22

being a little bit more explicit

39:24

and making it very easy to

39:26

copy paste and distribute

39:28

code throughout your application. For example,

39:30

at least from my experience now,

39:33

becoming more of an AI

39:35

engineer, using more AI tools in my

39:37

day to day workflow, I find

39:40

myself spending a lot of time in a workflow

39:42

that looks something like this. I have

39:44

an idea for an improvement I want to make to my site

39:46

or a new product that I want to build. I'm

39:48

kind of pairing, I'm pair programming

39:50

with v0 and chat

39:53

GPT or my favourite large language model.

39:55

I'm able to use v0

39:57

to help scaffold out that first version.

40:00

of the UI. I recently built like an email

40:02

clone and I just kind of typed in the

40:04

first version of what I wanted this thing to

40:06

look like, gave me the code, copy pasted it.

40:08

Okay, now I'm on to the logic.

40:10

Okay, now I need to when you click on

40:12

the button, I want to have a form, that

40:14

form is going to add a new row into

40:16

my Postgres database. So now I'm pair programming with

40:18

ChatGPT. Okay, spit out a SQL query that's going

40:20

to allow me to do this, but also sort

40:22

by this and limit to 100. And

40:24

also do two joins on these other tables.

40:27

Does this schema for my database look correct?

40:29

Okay, awesome. Just checking just want to make

40:31

sure. That's what the workflow has

40:33

kind of looked like for me, which is like

40:35

having this very, very experienced programmer who's way better

40:37

at database design than I am, sit beside

40:39

me and help me build a great product.
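As a concrete version of that "add a new row into my Postgres database" step (a sketch only; the table, columns, and the choice of a server action with `@vercel/postgres` are assumptions, not details from the episode):

```ts
// app/actions.ts: the kind of query you might iterate on with an LLM pair programmer.
'use server';

import { sql } from '@vercel/postgres';

export async function addMessage(formData: FormData) {
  const subject = formData.get('subject') as string;
  const body = formData.get('body') as string;

  // Insert the new row submitted from the form.
  await sql`
    INSERT INTO messages (subject, body, created_at)
    VALUES (${subject}, ${body}, NOW())
  `;
}

// The follow-up read described above: join, sort, and cap at 100 rows.
export async function recentMessages() {
  const { rows } = await sql`
    SELECT m.subject, u.name AS author
    FROM messages m
    JOIN users u ON u.id = m.user_id
    ORDER BY m.created_at DESC
    LIMIT 100
  `;
  return rows;
}
```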

40:42

Yeah, and I think one of the other nice things

40:44

about this type of tooling is there's kind of like

40:47

no judgment involved. So if you're, you know, trying to

40:49

learn something new, you might be a little bit nervous

40:51

or intimidated to go and, you know, ask like an

40:53

expert is like, you know, am I wasting this person's

40:55

time or, you know, depending on who that person is,

40:57

maybe they are, you know, better

40:59

or worse at like, you know, coaching

41:02

and helping people. So this is like,

41:04

you know, you're probably going to I

41:06

guess, like the bedside manner of the

41:08

chatbot experience is going to be something

41:10

that you can probably deal with. Unless

41:12

you add in custom instructions into ChatGPT. And you

41:14

can say, you know what, actually, like, just mess

41:17

with me a little bit. Like, here's the result.

41:19

I'm surprised you didn't know that. But none of

41:21

that's like, okay, take

41:23

on the personality of an arrogant, yeah,

41:25

professor that's been in the space for

41:27

40 years. Something like

41:29

that. Yeah. So

41:32

in terms of v0, can you talk

41:34

a little bit about what's going on under the hood?

41:36

Like, how is the model built?

41:39

Are you using existing open source

41:41

AI tools? Like, how are you

41:43

sort of also translating presumably what

41:45

is a textual response into these

41:47

like widgets? Yeah, so

41:50

I talked a little bit earlier about

41:52

the SDK and how that's kind of

41:54

been the foundational part of what makes

41:56

v0 successful. The v0 tech

41:58

stack is actually using the

42:01

AI SDK, it's using Vercel's DX platform

42:03

for like how you actually build the

42:05

project using our managed infrastructure to scale

42:07

it. It's using Vercel KV for a Redis

42:09

database. It's

42:12

using pretty much every single part of the products

42:14

that we sell to customers. So it's the ultimate

42:16

dogfooding of our tooling. So,

42:18

you know, a customer goes to V0, they

42:21

have an idea for the next big thing. Great.

42:24

They type in the search box or they

42:26

soon upload an image. It

42:28

talks to our model, which is

42:30

a combination of tools that then

42:33

can spit out essentially first version

42:35

of their UI that is using

42:37

Tailwind and ShadCN UI,

42:40

which is this open source library that

42:42

we think is great, built on top

42:45

of Tailwind CSS, an accessible

42:47

set of primitives called Radix, as

42:50

well as then React code as well too. So it spits out this

42:52

code where you can copy paste these

42:55

accessible React components, but then also

42:57

just vanilla HTML that

42:59

uses Tailwind if you want to. The great thing about

43:01

Tailwind here, by the way, is that there's a co-location

43:03

of the styles in the HTML, so

43:05

it makes it very copy paste friendly. So

43:08

you get that first version and then we made it in

43:10

such a way where you can literally copy paste

43:13

or you can run an MPX command to

43:15

install the components and then

43:17

put that code into your application.
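For example (a hand-written sketch, not actual v0 output): because the Tailwind classes sit directly on the markup, a snippet like the one below can be pasted into any Tailwind-configured project as-is, and the shadcn/ui primitives it leans on were installable with that project's CLI at the time (along the lines of `npx shadcn-ui@latest add button`).

```tsx
// components/customer-card.tsx: illustrative only, with styles co-located as Tailwind classes.
import { Button } from '@/components/ui/button'; // shadcn/ui primitive added via its CLI

export function CustomerCard({ name, invoice }: { name: string; invoice: string }) {
  return (
    <div className="flex items-center justify-between rounded-lg border p-4 shadow-sm">
      <div>
        <p className="text-sm font-medium">{name}</p>
        <p className="text-xs text-muted-foreground">Invoice {invoice}</p>
      </div>
      <Button variant="outline">Edit invoice</Button>
    </div>
  );
}
```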

43:20

And then what about customization? Like if I have my own

43:22

sort of brand guidelines, can I take into account

43:24

my company's colors and maybe

43:26

an existing style sheet? Yeah, we haven't built

43:29

this yet, but it's definitely something we want

43:31

to do, the ability to customize

43:33

the output based on your

43:35

own, like bring your own style guide, bring

43:37

your own design system. The way it works

43:39

today is that the primitives that we're building

43:41

on, the UI component library that's

43:43

integrated with the output, it does

43:46

give you some levers around I want to change

43:48

my spacing style and my typography style and my

43:50

color palette and all of these different

43:52

things to give you a

43:54

pretty unique look and feel. There's some good

43:56

examples of how this works. And Tailwind also

43:59

is pretty customizable. in terms of all

44:01

of the different scales in your design system.

44:04

But I think what will make this even better

44:06

is if you can actually literally

44:08

bring the components that are already in

44:10

your existing potentially private design system as

44:12

well too, and then output those. For

44:15

example, we use v0 as well for

44:17

internal components that are part of our

44:19

design system, our component library to build

44:22

for cell as well too. So yeah,

44:24

I think that will be a very

44:26

exciting addition. Is this

44:28

primarily focused on or limited

44:31

to component design? I want a

44:33

better checkout page, or can I

44:35

go and actually create a multi-page

44:38

experience with this? Yeah, so

44:40

today it's focused on

44:42

a single route, a single page. And

44:44

inside of that page, there may be

44:46

multiple components. So it's more than a

44:48

singular component. It can do a full

44:50

page. I would say it's the best

44:52

at singular focus component today. When it

44:54

starts to do a full route, it's

44:56

a little bit like there's more things

44:58

that need to be stitched together. And

45:00

it might take a few more follow-up

45:02

iterations or additional prompts. Typically, when I'm

45:04

building a full page, like I'm building

45:06

this full email client, I think it

45:08

took like five prompts for me to get something

45:10

I was really happy with. First prompt was pretty good. But

45:12

then I was like, actually, move this

45:15

sidebar up a little bit, add an icon in the

45:17

top right, this other thing here. So it's OK to

45:19

have a few more iterations on top of there. In

45:22

the future, we might try to

45:24

make it easier to design that

45:26

multi-page experience. But we're waiting to

45:29

see how things shake out after

45:31

early customer feedback. Yeah. And I

45:34

imagine more things can go wrong, the

45:36

bigger the scope of it. You're

45:38

essentially creating a bigger surface area

45:40

for potential errors or mistakes. Yeah.

45:43

The best way to get great results is

45:45

to be as specific as possible, which is

45:48

how a good workflow for

45:50

using other large language model

45:52

tools is you

45:54

get a very specific prompt. You can

45:56

actually ask the large language model to

45:58

critique your own prompt, like, how can

46:00

I make this prompt a little bit better? Is there

46:03

anything missing? There could be ways

46:05

for us to essentially build that loop

46:07

into v0 itself, especially when you're considering

46:09

uploading an image of how you want

46:12

something to look, it can then kind

46:14

of continuously reevaluate, am I close to

46:16

this, like looking at the output versus

46:19

what the, you know, initially uploaded image

46:21

was. So what's next? You know,

46:23

you kind of touched on some of these things,

46:25

but are there other investments in the AI space

46:27

that you're working on that you can share? Yeah,

46:30

I think there's still a ton of opportunity for

46:32

us to continue making the output of

46:34

v0 as useful

46:37

as possible. I think really fine tuning

46:39

how we can create an experience that

46:41

will, you know, on as few number

46:43

of prompts as possible, you're going to

46:46

get to something that you're ready to

46:48

copy paste and bring into your application.

46:51

There's still so much that we can do there.

46:53

So we're going to keep seeing what our

46:56

first customers of v0 here who have been

46:58

using it to build some pretty awesome products

47:00

and some pretty awesome UIs, how

47:02

we can make their lives a little bit easier. So

47:04

that's kind of on the v0 side on the AI

47:06

SDK side, we're just starting to

47:08

explore the new OpenAI Assistants API, I

47:10

believe we just launched some experimental support for

47:12

actually using that through the SDK, lots of

47:14

cool stuff happening there. A lot of cool

47:17

stuff happening in the open source space and

47:19

open source models that you can bring in

47:21

and kind of self-host or host on your

47:23

own infrastructure. So we want

47:25

to make the AI SDK really the place where

47:27

you can integrate with all of these tools. So

47:30

that's that. And then on Vercel, we're just

47:32

trying to make it as easy as possible for you

47:34

to accelerate your product development,

47:37

accelerate bringing AI into your product with

47:39

as low friction as possible. So

47:41

giving you the frameworks and the tools,

47:43

whether it's AI SDK or v0, but

47:45

then also, you know, if you want

47:47

to bring your own back

47:49

end that's using your own AI model

47:51

that has, you know, some specific set of constraints,

47:54

we still want to support that as well too.

47:56

Okay. And then if people want to learn more

47:58

about this or get in contact with with you,

48:00

what's the best place to do that or how should they do

48:02

that? Yeah, the AI SDK and V0,

48:04

if anyone has feedback about those, I would

48:06

love to talk more about them. If you're

48:09

using V0, if you're using the SDK, V0.dev,

48:12

if you wanna go try things out,

48:14

then sign up. Happy to help with

48:16

anyone there who has questions. On Vercel in general,

48:19

if you go to vercel.com, we

48:21

have recently updated our homepage, so you should check

48:23

that out. I'm really happy with that. And

48:26

you can also reach out to me directly on

48:28

your favorite social media platform or

48:30

lee at vercel.com, I'm happy to talk with

48:32

you as well. Awesome, well, Lee, thanks

48:35

so much for being here. I'm glad that we're

48:37

finally able to connect in some fashion. I've been

48:39

a big fan of Next.js for years. I use

48:41

it for pretty much all my sort of demos

48:43

and side projects and use the Vercel platform. And

48:46

I'm really excited about a lot of the work that you're doing

48:48

in the AI space. I think it's different than a lot

48:51

of what other people are necessarily focused

48:53

on. And I think that this kind

48:55

of laser focus on the front end,

48:57

making AI accessible to any developer is

48:59

something that is needed. And it's

49:01

kind of the way of the future for how we're gonna build

49:03

a lot of these types of applications. Yeah. Thanks,

49:06

Sean, I appreciate that. Yeah, it's been fun. All

49:08

right, thank you. Cheers. Yeah, thanks for having me.
