Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
country was built on a distinctly
0:02
American work ethic. But today, work
0:04
is in trouble. We've outsourced most
0:06
of our manufacturing to other countries.
0:08
And with that, we sent away good
0:10
jobs and diminished our capability to
0:12
make things. American Giant is a clothing
0:14
company that's pushing back against this
0:16
tide. They make a variety of high
0:18
quality clothing and activewear, like
0:20
sweatshirts, jeans, dresses, jackets, and so much
0:22
more. All made right here in
0:24
the USA, from growing the cotton to
0:26
adding the final touches. So when
0:28
you buy American Giant, you create jobs
0:30
for seamstresses, cutters, and factory workers
0:32
in towns and cities across the United
0:34
States. They don't just make clothes. They
0:39
stitch people together. If all that sounds
0:42
good to you, visit american-giant.com
0:44
and get 20% off your
0:46
first order when you use code
0:48
STAPLE20 at checkout. That's 20%
0:50
off your first order at american-giant.com
0:52
with promo code STAPLE20. You just
0:54
realized your business
0:56
needed to hire someone... yesterday.
0:59
How can you find amazing
1:01
candidates fast? Easy. Just use indeed.
1:03
Stop struggling to get your job
1:05
post seen on other job sites.
1:07
With indeed sponsored jobs, your post
1:10
jumps to the top of the
1:12
page for your relevant candidates. So
1:14
you can reach the people you
1:16
want faster. According to Indeed
1:18
Data, sponsored jobs posted directly
1:20
on Indeed have 45% more
1:23
applications than non-sponsored jobs. Don't wait any
1:25
longer. Speed up your hiring right
1:27
now with Indeed. And listeners
1:29
of this show will get
1:31
a $75 sponsored job credit
1:34
to get your jobs more
1:36
visibility at indeed.com/P-O-D-K-A-T-S-12. Just go
1:38
to indeed.com/P-O-D-K-A-T-T-S-12 right now and
1:41
support our show by saying
1:43
you heard about indeed on
1:45
this podcast. Terms and conditions
1:48
apply. Hiring? Indeed
1:50
is all you need. Hey,
1:57
folks, welcome back to another
1:59
episode of JavaScript Jabber. This week
2:01
on our panel, we have
2:03
Steve Edwards. You had to think about
2:05
which one you were going to go to there for a second, didn't you? Well,
2:08
didn't want to say dad joker. Oh,
2:10
sorry. That's what my brain said. Oh, well,
2:12
that's what most people think. I'll do my
2:14
AJ and say, yo, yo, yo, coming
2:16
at you live from cold and rainy Portland.
2:19
We also have Dan Shappir. Hi
2:22
from a freezing Tel Aviv
2:24
at 50 degrees Fahrenheit, 9
2:26
degrees Celsius, which is, you know, literally freezing
2:28
for us. That's freezing. That would be,
2:30
that's above where I'm at now in the
2:32
middle of the day. Yeah,
2:34
I'm Charles Max Wood from Top End
2:37
Devs and yeah, it's, it's 40 something
2:39
degrees here. Yeah. And I went
2:41
for a walk. So, yeah, no,
2:43
it's better for us. We
2:45
also have Lee
2:47
Robinson. Lee, welcome back.
2:50
It's good to be here. It's
2:52
58 degrees Fahrenheit here, which is
2:54
like a sauna because it was
2:56
negative Fahrenheit like last week. Wow.
2:58
I'll take it. Yeah. I
3:01
was going to say I'm going to come out there. But
3:03
if you're getting negatives, forget it. Yeah.
3:05
That's classic Midwest. It's
3:07
very cold. Yeah. So it's
3:09
been what a year or so since
3:11
we had you on. What's new? A
3:15
lot. There's lots we can talk
3:17
about through Next.js, Vercel, v0, and
3:19
AI. All sorts of fun stuff.
3:22
It sounds like you're mostly doing the same kind of thing
3:24
that you were doing last time. Yes.
3:26
Yes, definitely. I
3:28
was saying before we started that
3:30
it's pretty amazing to me that
3:33
such a small company, relatively speaking,
3:35
of course, because you're a multi-billion-dollar
3:37
company with a few hundred employees,
3:39
but it's still relatively small, is
3:42
able to have such a huge
3:44
impact on web development. It
3:46
seems like everything is kind of revolving
3:48
around you these days in one
3:50
way or another. One
3:53
thing that I think has worked well for us
3:55
that Guillermo likes to call recursive founder mode, which
3:57
I think is kind of funny. If you've seen
3:59
all the discourse about founder mode, it
4:01
is trying to hire former founders or
4:03
people who have that energy and
4:06
then giving them ownership and agency
4:08
over a domain and letting them kind
4:10
of run with it. A good example
4:12
of this is just last week, we
4:14
released a flags SDK for feature flags,
4:16
an open source toolkit you can use
4:18
to implement feature flags correctly in your
4:20
app using the server, preventing layout shift,
4:22
with any feature flag provider that you
4:24
want. And this is kind of a
4:27
founder led thing. We have Dominic who
4:29
is really owning this project from end
4:31
to end. It's a very, you know,
4:33
small team working on this, but small
4:35
teams can do amazing things when empowered
4:37
and given the right direction and resources.
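The server-side flag idea can be sketched in a few lines. This is not the actual Flags SDK API; `defineFlag`, the context shape, and the rollout logic are all invented for illustration. The point is that the flag is decided on the server, before any HTML is sent, so the page never flips between variants after it loads.

```typescript
// Illustrative sketch of server-evaluated feature flags (NOT the real
// Flags SDK API). The decision happens on the server, so the rendered
// HTML already contains the chosen variant and no layout shift occurs.
type Context = { userId?: string };

function defineFlag<T>(key: string, decide: (ctx: Context) => T) {
  return (ctx: Context = {}) => ({ key, value: decide(ctx) });
}

// Hypothetical flag: roll "new-checkout" out to ~50% of users,
// bucketed by a stable hash of the user id so each user always
// sees the same variant.
const newCheckout = defineFlag('new-checkout', ({ userId }) => {
  if (!userId) return false;
  let h = 0;
  for (const c of userId) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 100 < 50;
});
```

In a Server Component you would evaluate `newCheckout({ userId })` before rendering and branch on `value`, so the variant is baked into the initial document.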
4:39
I was curious if it's based on edge
4:41
computing. So
4:43
we do really
4:46
recommend people to use
4:48
the server in general
4:50
because most of the jank that you
4:52
see in websites that have feature flags
4:54
is because they wait for the initial
4:56
document to get on the page, the
4:58
first bundle of JS to load, and
5:00
then they're evaluating what feature flags are
5:02
on. And you get that weird shift
5:04
of maybe one experiment and then the
5:07
other, or you also see this with
5:09
authentication, like the logged out state and
5:11
then it switches to the logged in
5:13
state. So we are strongly pushing for
5:15
server side experimentation, server side,
5:17
uh, flagging. And to do that, generally
5:19
you want to put your flags close
5:21
to where your database is. So
5:24
it's not necessarily that the flags need
5:26
to live at the edge. Generally they'd
5:28
live at origin by your database, but
5:30
you can still do some things early
5:32
in the routing layer at the edge.
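The routing-layer idea can be sketched as a hypothetical middleware-style function: bucket the visitor cheaply (no database round-trip at the edge) and rewrite the request to a pre-computed static shell cached at the CDN. The cookie name and paths here are invented for illustration.

```typescript
// Hypothetical edge routing-layer rewrite: decide a variant from an
// existing cookie and point "/" at one of the pre-rendered static
// shells, leaving data-heavy flag decisions at origin.
type EdgeRequest = { url: string; cookies: Record<string, string> };

function rewriteForVariant(req: EdgeRequest): string {
  // Bucket by a cookie so the decision is cheap and stable per visitor.
  const bucket = req.cookies['ab-bucket'] === 'b' ? 'b' : 'a';
  const url = new URL(req.url);
  if (url.pathname === '/') {
    // Serve one of the pre-computed shells cached at the CDN layer.
    url.pathname = `/variants/${bucket}/home`;
  }
  return url.toString();
}
```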
5:34
Maybe that's redirects or rewrites. Maybe
5:37
that's pre-computing multiple different variants of
5:39
the page and putting the static shells
5:41
of those pages at the edge like
5:43
in a CDN layer so you can
5:45
fetch those early. So it's kind
5:47
of a bit of both. We
5:49
want to use the edge layer for
5:51
the static assets and then put
5:53
your data next to your compute
5:55
at origin. Yeah, what
5:58
some people actually do to reduce
6:00
shift in these scenarios when
6:02
they use client side is
6:04
to actually intentionally put a
6:06
render-blocking script at the
6:08
very top of the page.
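Such an anti-flicker snippet might look like the following sketch: hide the page until client-side flags resolve (or a timeout fires), trading the flash of the wrong variant for a delay that hurts Core Web Vitals. The document and `loadFlags` are injected here so the logic can run outside a browser; `loadFlags` stands in for a vendor's flag-loading call.

```typescript
// Sketch of the anti-flicker trick used by client-side flag vendors:
// hide the document until flags resolve, then reveal it.
type MinimalDoc = { documentElement: { style: { visibility: string } } };

function antiFlicker(
  doc: MinimalDoc,
  loadFlags: () => Promise<Record<string, unknown>>,
  timeoutMs = 3000,
): Promise<Record<string, unknown>> {
  doc.documentElement.style.visibility = 'hidden';
  const reveal = () => { doc.documentElement.style.visibility = ''; };
  const timer = setTimeout(reveal, timeoutMs); // fail open if flags hang
  return loadFlags()
    .catch(() => ({}))                         // reveal even on errors
    .then((flags) => { clearTimeout(timer); reveal(); return flags; });
}
```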
6:11
But that obviously comes at
6:13
a very high cost for
6:15
Core Web Vitals. Totally. I think
6:17
the biggest offender that I
6:19
see is teams who don't realize
6:21
that they're actually leaking all of
6:23
their flags on the client side.
6:25
They're using client-side experimentation, but
6:27
if you inspect the DOM or
6:29
you do some magic in your
6:31
DevTools, you can actually see every
6:33
single experiment or every single flag. That's
6:37
not necessarily bad. It's
6:40
not necessarily bad,
6:42
although sometimes I think
6:45
maybe on a large enough team,
6:47
the developer thinks that is a secret value,
6:49
and in reality, it is not. And
6:51
they don't realize that they're actually exposing that
6:53
to the client side. So it
6:55
can be fine if you know what you're
6:57
doing. I've just seen it misused a few
6:59
times. Don't put
7:02
credentials in feature flags. Yes,
7:04
definitely. Don't
7:08
put credentials in your source code at
7:10
all. Yeah. One thing
7:12
that I want to just touch on
7:14
here because I love the idea of
7:16
kind of the internal entrepreneur.
7:19
I have to say that when
7:21
I've worked like contracts or
7:23
full-time jobs, a lot of times
7:25
it's, we're going to give you features and then
7:28
you're just going to implement them. And I
7:30
really like that open
7:32
feeling of, Hey, we need
7:34
this solution. Go
7:36
make it. That would
7:38
kind of be, if I didn't want
7:40
to take the entrepreneurial risk, that
7:42
would be my ideal job. Yeah,
7:46
we like to think about empowering
7:48
startups within a startup. So another
7:50
good example of this is our
7:52
product v0.dev, which
7:54
allows you to very quickly build
7:56
websites and web applications with AI
7:58
or just better understand how to use
8:00
web tools like Next.js and
8:02
React through AI. And that
8:05
team is very small and it's
8:07
led by a former founder who
8:09
really understands how to build great
8:11
products and has been given, you
8:13
know, a budget and autonomy to build
8:15
this product end to end and
8:17
really monitor success and own
8:19
their numbers and it's working pretty well. So
8:22
since you did
8:24
mention v0 and AI seems
8:26
to be a fairly popular
8:28
topic. Yeah, it's not
8:30
around here. What does AI
8:32
stand for? Okay. Apple
8:35
intelligence, I think. There you go. Yeah,
8:37
that's right. Anyway, what
8:40
can you tell
8:42
us about v0? I
8:45
mean, I've seen it. I played
8:47
with it. It's really cool. But,
8:49
you know, what, what
8:51
can you tell us about it? Yeah.
8:54
So we've been working
8:56
on V zero for over a
8:58
year now. The first version that
9:00
we released was kind of our first
9:02
experimentation into the generative UI space,
9:04
taking these new large language models at
9:06
the time that were getting really,
9:08
really good at being able to produce
9:10
code and figure out how we
9:12
could help them better
9:14
craft the front end, you know,
9:16
do animations, do CSS. And
9:19
over time, we've just been slowly and
9:21
slowly adding more functionality. So
9:23
VZero now supports building full stack
9:25
applications. You can integrate through
9:27
databases, through Vercel. You
9:29
can add secrets or environment variables
9:31
through Vercel to connect to any external
9:34
services, maybe some AWS queue service
9:36
you want to use, could be really
9:38
anything. And we've also launched
9:40
a community that has published many
9:42
different V zero generations that you
9:44
can fork and get started with.
9:46
So what once started as a
9:48
very small tool to allow you
9:50
to quickly build some UIs is
9:52
now this kind of full featured
9:54
prototyping and app building platform that
9:56
allows devs to take an idea,
9:58
whether it's something they want to build
10:01
from a screenshot or a prompt or
10:03
a preexisting kind of template, take it, fork
10:05
it, prompt it, turn it into something
10:07
that they would like to build or they
10:09
would like to use, and then deploy it
10:11
to Vercel in just a couple of clicks
10:13
and have that thing actually live on
10:16
the internet to share. And since
10:18
we've been slowly adding more of these
10:20
features, the growth trajectory of v0 has
10:22
been, it's been really wild to see.
10:24
Obviously, AI is very popular right now
10:26
and lots of people are interested in
10:28
getting excited about how these tools can
10:30
help them kind of level up their
10:32
careers as developers. But I think more
10:34
interestingly, bringing in a whole
10:36
new group of people into what
10:38
it means to be a developer. It's
10:40
less about writing the code for
10:42
these people. It's more about they have
10:44
great ideas and they now have
10:46
tools that actually allow them to build
10:48
those ideas. So think about,
10:50
for example, the product marketers, the product
10:52
managers, maybe some of the designers
10:55
who don't have as much coding experience.
10:57
A lot of them are being able to
10:59
use tools like v0 and actually build and
11:01
publish their ideas. So I have a
11:04
ton of questions about that. So first of
11:06
all, Do you want to start? No,
11:09
go ahead. Actually, let me go
11:11
ahead. I'm gonna get in. So
11:13
I don't worship at the React
11:15
altar myself. I'm a Vue dev
11:17
myself. And so I
11:19
tend to work more with Vue
11:21
and Laravel and Inertia and stuff. So
11:23
I've heard Vercel talked a lot
11:25
about, but I don't really know it.
11:27
And I suspect there's a lot
11:29
of people listening who might not know
11:31
it as well. So before you
11:33
dive any more into Dan's
11:35
question, I was wondering if you could
11:37
just give an overview on what Vercel
11:40
is and who its target audience
11:42
is, I guess. Yeah, absolutely. Vercel
11:44
is a developer cloud. So
11:47
we're trying to give you,
11:49
as a developer, all the
11:51
tools you need to build
11:53
websites and web applications from
11:55
hosting and managing your React
11:57
or Vue or Svelte applications
11:59
to having observability into your
12:01
production infrastructure usage, integrating
12:04
with databases or other back-end services
12:06
and other cloud providers or hyperscalers,
12:08
bringing all of these tools into kind
12:10
of one place that you can
12:12
use as you put all your Lego
12:14
bricks together to actually be able
12:16
to build your amazing piece of art.
12:19
And that's Vercel at a
12:21
50,000-foot view. So now you just
12:23
mentioned V, but I'm sorry, you
12:26
mentioned Vue, but I'm looking at your
12:28
supported frameworks docs and I don't
12:30
see Vue listed anywhere. I see Next
12:32
and Nuxt. I see Nuxt.
12:36
You just realized
12:38
your business needed to hire
12:40
someone yesterday. How can you
12:42
find amazing candidates fast?
12:44
Easy. Just use indeed. Stop
12:46
struggling to get your job post
12:49
seen on other job sites. Don't
13:06
wait any longer. Speed up your
13:08
hiring right now with Indeed. And
13:10
listeners of this show will
13:13
get a $75 sponsored job
13:15
credit to get your jobs
13:17
more visibility at indeed.com/P-O-D-K-A-T-S-12. Just
13:20
go to indeed.com/P-O-D-K-A-T-T-S-12 right now
13:22
and support our show by
13:25
saying you heard about indeed
13:27
on this podcast. Terms and
13:29
conditions apply. Hiring?
13:32
Indeed is all you need. And
14:34
yeah, we, we support, um, Nuxt
14:37
is the primary way we see
14:39
people deploying Vue applications to Vercel, but
14:41
we also support doing a more
14:43
traditional, just client-only Vue application as
14:45
well. Generally we like to, um,
14:47
recommend Nuxt. I think they're building something
14:49
really great with the framework, but
14:51
also if you want to use,
14:53
like, Vite and, uh, a Vue application,
14:55
for example, that's also supported. Hmm. Okay.
14:59
So, going back to V zero,
15:01
first of all, V zero, V
15:04
one. Yeah,
15:06
right. It's like, okay, as we continue
15:08
to iterate on it, do we bump up
15:10
the version number? No, the,
15:12
the naming it's, it's kind of
15:14
funny. Someone's going to go name-squat
15:16
V one. Yeah. Yeah. Yeah. I
15:19
didn't expect this to happen, but we've
15:21
seen a lot of people have a play
15:23
on the name of V zero. So
15:25
we saw an email zero. I
15:28
think there's a YC company now
15:30
doing A0 or something like that. So
15:32
it's been fun to see other
15:34
people take this idea of GenUI and
15:36
apply it to other domains, emails,
15:38
mobile apps, et cetera. But
15:40
really, the whole intention behind
15:42
the name was you can get
15:44
started here. It's not necessarily
15:47
saying it's going to replace your
15:49
production code. It's not
15:51
going to, you know, prevent the
15:53
need for still writing a lot
15:55
of code. It's helping you get
15:57
started quickly and building something great.
15:59
By the way, there's Void
16:01
Zero, Evan You's company.
16:03
Yeah Yeah, it was funny. I
16:05
think that name was after v
16:08
zero, but then it's pretty
16:10
similarly named, which is kind of
16:12
funny. Yeah, so the fact
16:14
that it's v zero kind of
16:16
And also looking at the demos
16:18
that you primarily show, it seems
16:20
to indicate that what you're building
16:22
is literally version zero of a
16:24
product. That it's
16:26
taking either an idea,
16:28
a concept, like you
16:31
said, maybe an image
16:33
or a screenshot. Does
16:36
it also support Figma or
16:38
something like that? Yeah.
16:40
Yeah. You can paste in a
16:42
Figma link and it can
16:44
understand your design system colors and
16:46
what's on the page. And
16:48
then start building a UI from
16:50
there. Yeah. And basically it
16:52
turns that into code that interactively
16:54
replicates the design that you
16:56
gave to it. Correct? Yeah.
16:59
Yeah. The big difference in the way
17:01
to think about this versus a
17:03
no code or a low code tool
17:05
is generally with the previous generation
17:07
of tools, kind of pre-AI, was
17:10
the code that you got out
17:12
from the low code or no code
17:14
tool was not necessarily production quality
17:16
code. It was not code that was
17:18
using the popular frameworks of the
17:20
world that you could then take and
17:23
kind of eject and actually start
17:25
building your application into something that was
17:27
a real app. It was, it
17:29
was more so like a, maybe just
17:31
a big blob of HTML with
17:33
some inline JS in there.
17:36
It wasn't, you know, using the
17:38
ecosystem of the npm libraries that we
17:40
have today versus kind of rebuilding
17:42
everything from scratch. And it wasn't also
17:44
built on top of reusable, accessible
17:46
component primitives, which is something that v0
17:49
does use, which is a component
17:51
distribution system called shadcn/ui. Yeah,
17:54
I'm familiar, of course, with ShadCN. ShadCN?
17:59
ShadCN. It's a
18:01
component library, UI component library,
18:03
popular one, probably the most
18:06
popular one. It has a
18:08
unique approach because the traditional
18:10
way of distributing component libraries, think
18:12
about maybe Material UI is a
18:14
very popular one, for example, from
18:16
Google. They build these
18:19
abstractions that are very good. They
18:21
get published as npm packages, and then you
18:23
pull them down and you use them in
18:25
your app. But what happens when you want
18:27
to take that and transform it to your
18:30
own design system, your own brand? Well,
18:32
you can extend them to an extent,
18:34
but you don't control the code. You're
18:36
using the abstraction through npm and you
18:38
can't go in and actually modify the
18:40
source code very easily. Of course, there's
18:43
hacks, right? What shadcn is trying to
18:45
do is it's more of a component
18:47
library distribution system or a way for
18:49
you to build your own component library.
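As an illustration of the "you own the code" idea, a shadcn-style component's variant logic is typically a plain function living in your own repo rather than behind an npm package. The class names below are Tailwind-style and purely illustrative.

```typescript
// Sketch of owned component code: because this file is copied into your
// project, tweaking a design token is a one-line edit instead of fighting
// a library's theming API.
type ButtonVariant = 'default' | 'outline' | 'ghost';

function buttonClasses(variant: ButtonVariant = 'default'): string {
  const base = 'inline-flex items-center rounded-md text-sm font-medium';
  const byVariant: Record<ButtonVariant, string> = {
    default: 'bg-primary text-primary-foreground',
    outline: 'border border-input bg-transparent',
    ghost: 'hover:bg-accent',
  };
  return `${base} ${byVariant[variant]}`;
}
```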
18:51
So it gives you all of the
18:53
code. You can effectively copy paste it
18:55
into your, you know, into your editor,
18:58
into your project, tweak all the tokens
19:00
and the design system colors and install
19:02
packages, delete packages. And then there's
19:04
a nice CLI that helps you add new
19:06
components as you need them. I
19:08
got you. I have to say,
19:10
I went on there and I mashed the button for
19:12
a landing page. And it
19:15
gave me a next
19:17
app. And
19:20
then I clicked preview and it said it
19:22
couldn't load because I couldn't find the CSS file.
19:25
But I mean, the code looks good.
19:27
Yeah, it's just a little. But, but
19:29
it's AI and it generates and sometimes
19:31
misses things. I've seen. Yeah. Yeah. You
19:33
should try the, is there a fix
19:35
button in the bottom left? Um,
19:39
ideally, if there's an error thrown, there'll be a
19:41
little fix button, which is a good time to
19:43
talk about the general philosophy of these tools. Part of
19:45
the reason why we called it V zero, too:
19:47
with these, um, you know, non-
19:49
deterministic AI systems, they're going to get
19:51
things wrong, right? There's, it's never going
19:53
to be perfect. And
19:55
yeah, it's generative. It predicts the
19:57
next word, or in this case, the
19:59
next token of your code. Yep.
20:02
And so it'll miss stuff. Yeah. Totally.
20:04
So one thing we've tried to
20:06
do from the unbounded space of you
20:08
can generate any code in the
20:10
world to try to narrow that down
20:12
into something that's predictably outputting consistent
20:14
UI is we try to add a
20:16
lot of systems and pieces in
20:18
place to narrow in and improve the
20:20
quality ratio. Cause ideally at the
20:22
end of the day, as a consumer
20:24
of this product, you want quality
20:26
generation, both in the UI itself, but
20:28
in getting working functional code. And
20:31
the biggest lever that we've done to
20:33
do that is really focusing in
20:35
on our niche, which is React applications.
20:37
That's the pipeline where we're reviewing and ensuring
20:39
that the data that goes into the
20:41
system is really high quality and also reviewing
20:43
the output if somebody has a bad
20:45
time. So yeah, I'm curious if that
20:48
works for you. So a few
20:50
questions about that. First of
20:52
all, which model are you using?
20:54
There's a bunch, actually. Generally,
20:56
you can think about it
20:58
as a big decision tree, because
21:01
the user puts in a
21:03
prompt. That prompt, we first kind
21:05
of have to classify what they were even
21:07
trying to do. Were they trying to just
21:09
ask a knowledge question? Were they wanting to
21:11
generate some code? There's a lot of branches
21:13
that can go down. And then even as
21:15
it goes down the branches, We
21:17
might want to flip between models on
21:19
the fly depending on the quality of
21:21
one model or maybe a new model
21:23
comes out. I think the new Claude
21:25
model was just released today, for example. So
21:28
we try to make this system that
21:30
helps us get the best quality output.
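That decision tree might be sketched like this. The keyword heuristic and model names are invented for illustration; a real router would use a classifier model plus evals to keep quality predictable.

```typescript
// Hedged sketch of prompt classification and model routing: classify
// what the prompt is asking for, then pick a model for that branch.
type Intent = 'knowledge' | 'codegen';

function classify(prompt: string): Intent {
  // Toy heuristic standing in for a real classifier model.
  return /\b(build|component|page|app|code)\b/i.test(prompt)
    ? 'codegen'
    : 'knowledge';
}

function pickModel(prompt: string): string {
  const routes: Record<Intent, string> = {
    knowledge: 'fast-chat-model',   // cheap, low-latency answers
    codegen: 'strong-code-model',   // slower, higher-quality generation
  };
  return routes[classify(prompt)];
}
```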
21:32
And then we abstract away everything else
21:35
in the middle and just focus on
21:37
having really high quality data in and
21:39
evals or tests to ensure that the
21:41
quality is predictable. Or you
21:43
could use DeepSeek if you don't mind
21:45
a bit of Chinese propaganda on your
21:47
page. Well,
21:50
maybe, but you can
21:52
also run DeepSeek, I
21:54
think it's the unbiased one,
21:56
on, like, Ollama or
21:58
something. And then you
22:00
don't have all the filters in front of
22:02
it. I mean, they still trained it
22:04
on their own data. And so if you
22:06
ask it questions about Chinese history that
22:08
are inconvenient, it just doesn't give you a
22:10
correct answer because it has no idea.
22:13
But yeah, I think a lot of the
22:15
enforcement was coming at the, like, inference-time
22:17
compute, the test-time compute. The
22:19
cloud-hosted version of DeepSeek was where
22:21
they were doing a lot of the
22:23
biasing towards the answers that they didn't want
22:27
to answer. So the open source models
22:29
definitely in the training data, they are still
22:31
biased in a certain direction. But you
22:34
can get better
22:36
results by doing it that way, which I've
22:38
already seen a few models on Hugging Face.
22:40
And I think there's like a Llama-distilled
22:42
version as well. So I played
22:44
with DeepSeek, and
22:46
I played with some of the
22:48
other ones. And it is really,
22:50
it's really good in a lot
22:52
of areas. So the name V
22:54
zero seems to indicate that the
22:56
focus on the product is on
22:58
the initial generation and less so
23:00
about taking an existing app and
23:03
modifying it or adding capabilities into
23:05
it. Is that correct or is
23:07
that just my interpretation? That
23:10
was definitely where we got started as
23:12
we've added more functionality to the product.
23:14
We're now getting to a point where,
23:16
you know, in the future, you'll be
23:18
able to connect to a Git repo
23:20
and bring in the code you already
23:22
have in your application and make changes
23:24
from there. So I think
23:26
in the future, V0 will evolve to
23:28
be more of your general-purpose
23:30
AI assistant that you can use for
23:32
many different things or is trying
23:34
to incrementally get there as we tackle
23:36
one problem and hopefully do it
23:38
well before kind of moving on and
23:40
expanding scope. So if that's
23:43
the case, how is it
23:45
different from something like Cursor or
23:47
Copilot or something like that?
23:50
Yeah, totally. I think of
23:52
Cursor and Windsurf and
23:54
some of the AI enabled
23:56
IDEs as tools for
23:58
I'll call them professional programmers
24:00
or professional coders who
24:02
are primarily in their IDE,
24:04
in their editor all
24:06
day. They're
24:08
fantastic at that. And I use
24:10
Zed personally, which also has
24:12
AI tools, but they're all incredibly
24:14
good at this. Versus I
24:16
think the market for V0 and
24:18
other tools doing more of the
24:20
generative UI space is more so
24:22
I think for the kind of
24:25
adjacent folks who are now getting into
24:27
development. Certainly, obviously I'm a professional programmer
24:29
and I still use V zero, but
24:31
I've seen a lot of people either
24:33
learning to build products through V zero,
24:36
or maybe they had an idea and
24:38
they weren't really sure how to build
24:40
it yet. All using V zero as
24:42
a starting point to kind of kick
24:44
off that knowledge where it's more about
24:46
the design and the ideas and
24:48
the product experience than it is just
24:50
the code itself. And of course, you
24:52
can still view the code. It's still
24:54
important that you can observe the code
24:56
and modify the code. But first and
24:58
foremost is the actual thing that you're
25:00
building, which is kind of just a
25:02
different model than the professional programmers, the
25:05
IDEs, just a different, different
25:08
model. So I'll
25:10
challenge you a bit about this.
25:12
If you don't mind, I've
25:14
been told that I sometimes ask
25:16
tough questions and, hopefully, you're
25:18
fine with it. He's mean
25:20
to us too, it's okay. If
25:23
that's the case, if
25:25
that's your target audience, the
25:27
way that you're presenting
25:29
it, isn't React potentially too
25:31
low of a level
25:33
of abstraction? Because
25:38
if I'm more of
25:40
a design person, you
25:42
know, it's great that I can
25:44
give my code to a developer
25:46
later on, but if I'm playing
25:48
with it, maybe I would like
25:50
to play with, you know, at
25:53
the higher level of
25:56
abstraction with more sophisticated boxes,
25:58
but less complicated boxes. Yeah.
26:02
So while we do use React,
26:04
the default for V zero
26:06
generally is going to be a
26:08
Next.js application. So it is
26:10
a higher level abstraction on the
26:12
underlying React primitives. Generally
26:14
though, the reason why I think React
26:16
is a good fit for this type
26:18
of model is when people are getting
26:20
started with building their first application and
26:22
they don't have a lot of experience,
26:25
the mental model of components actually makes
26:27
a lot of sense to people,
26:29
more so than a vanilla HTML or
26:31
vanilla JavaScript file. If we go
26:33
back to jQuery days and you have
26:35
a thousand line jQuery file, this
26:37
is totally fine and it totally works.
26:39
But looking at specific query selectors,
26:42
they're looking up an element by ID
26:44
and then swapping out the data
26:46
from there, the styling from there. Is
26:48
it as intuitive for the first
26:50
time user, I think, as looking at
26:52
some React code? Maybe not at
26:54
first, but I think you pick it
26:56
up a little bit more conceptually
26:58
as you kind of get into the
27:00
code a bit. At
27:02
least that's been my experience talking with folks who are
27:04
kind of learning or looking at React code for
27:06
the first time. And secondly, the
27:08
nice thing about the component model and
27:11
composition of React is when an LLM generates
27:13
a bunch of stuff. You know,
27:15
maybe there's 10 different files, 15 different files.
27:17
You want to be able to place those
27:19
into different, you know, spots in your application
27:21
without there being these global side effects. And
27:23
that's one really nice thing that the React
27:25
model does pretty well. Now, I will say
27:27
if you are a little bit more, maybe
27:30
not super beginner, but maybe intermediate and you
27:32
kind of know a little bit, you know
27:34
that you want to just work with HTML
27:36
and no JavaScript, you can explicitly prompt VZero
27:38
to say like, Hey, I know what I'm
27:40
doing here. Like give me more of that
27:42
low level primitive and I can kind of
27:44
build from there. Cool. I
27:47
want to jump in here. Just one more thing,
27:49
Chuck. Oh, go ahead. Will
27:51
it be able to also generate,
27:53
let's say stuff for Svelte? Yeah,
27:56
I think right now it can
27:58
answer Svelte questions and it can
28:00
help you build Svelte applications in
28:03
terms of the completely dynamic running
28:05
of Svelte applications in the browser.
28:07
We don't have support for that
28:09
yet, but I would like to
28:11
get there in the future for
28:13
sure. It kind of goes back
28:15
to that quality of the unbounded
28:17
space of next token prediction is
28:19
already tough enough. So we're trying
28:21
to slowly build a system that
28:23
helps us get predictable,
28:26
repeatable success. And then we can
28:28
consider expanding up further. Yeah.
28:30
Just related to that. Um, so
28:32
I told it what my error was
28:34
in here and it fixed it. It
28:36
just gave me a global CSS, but
28:38
then the next thing I did
28:40
is I said, can you give this
28:42
to me as a Nuxt app
28:44
instead of a next app? And
28:46
of course, then what it did
28:49
is it gave me a Nuxt app
28:51
with the source folder for the
28:53
Next app still in it. You
28:55
know, and so, you
28:57
know, you figured this stuff
28:59
out with your prompt engineering and things,
29:01
but, um, I think it's just fascinating
29:03
how far you can get. And I've
29:05
done this with other projects with AI
29:07
where I've essentially said, okay, give me
29:09
an app, you know, this kind of
29:11
an app, you know, with chat, GPT
29:13
or some of the other ones, give
29:15
me this kind of an app with
29:17
these kinds of features and these kinds
29:19
of, you know, deals and I'm using
29:21
Tailwind, and it'll do it.
29:24
And then if I go and run
29:26
it, I can come back and tell
29:28
it I got this error and it
29:30
will generally fix it within one go.
29:32
Sometimes it just can't quite figure
29:34
it out and that's you know that
29:36
makes me feel good because I'm never
29:38
gonna not be able to get a
29:40
job as a programmer but um yeah
29:42
it's it's pretty interesting to see how
29:44
far you can get on this stuff
29:46
and, uh, yeah. Um, how maintainable is
29:48
the code that it generates? How
29:52
maintainable is the code? This
29:55
is one of the biggest advantages
29:57
in my opinion of building on foundations
29:59
like existing popular open source frameworks
30:01
is that you're ejecting the code out
30:03
into a system that is an
30:05
open standard. It can be deployed to
30:07
any infrastructure in the world, whether
30:09
it's an S3 bucket or deployed to
30:11
something like Vercel. And there's this
30:13
thriving community of developers who are, you
30:15
know, consistently with every release, making
30:17
the docs a little bit better, making
30:19
the examples a little bit better,
30:21
building all these templates in the community,
30:23
you know, fixing bugs, making improvements.
30:25
And it has the backing of a,
30:28
you know, a decently sized staff
30:30
team, the Next.js team, who
30:32
is helping make it better every
30:34
day. Yep. I
30:36
did run into a rate limit. I asked
30:38
it too many questions. And then it said, you
30:40
have to sign up for yourself to keep
30:42
using it. Yeah, yeah, sign up. No, $20 a
30:44
month more. No. Yeah,
30:46
no, I've used Vercel in
30:48
the past and I like it.
30:50
Nice things going on there.
30:52
I saw a relatively new capability in
30:55
it. I'm guessing
30:57
that you can select a component
30:59
and ask Ask it to
31:01
modify just that component. Yeah, correct.
31:03
That's a relatively new feature.
31:05
I think. Mm-hmm. I think
31:07
the next step would be to
31:09
add some sort of drag
31:11
and drop capabilities or something Yeah,
31:14
we'd love to find
31:16
a good way to
31:18
bridge the gap of
31:20
a little more visual
31:22
editing, tooling that helps
31:24
you think about the UI from
31:26
a higher level of abstraction versus
31:28
just prompting. Of course, you can
31:31
get very far with prompting, but
31:33
sometimes you like the tools that
31:35
things like a Figma might have.
31:37
There's some pretty helpful stuff in there. Yeah.
31:41
Okay, so anything else you want to
31:43
say about V0 before moving on to
31:45
the next one? Yeah, I've
31:48
got a whole bunch of tools. Yeah,
31:50
I just wanted to jump in
31:52
on V0. So it's targeted at people
31:55
who maybe don't have a ton
31:57
of coding experience or, you
31:59
know, I could see this, you know, it
32:01
gave me a nice enough layout on
32:03
the thing that I clicked on. I
32:05
could see this as just kind of
32:07
a, hey, this is a kickoff point for
32:09
me and I'm competent at this stuff. Are
32:12
you seeing this as something
32:15
that could conceivably become a proper
32:17
dev tool or are you
32:19
looking to go in that direction
32:21
with something else or where
32:23
do you land on that? Yeah,
32:26
at least for me and my experience, I
32:28
already consider it a proper dev tool. It's
32:30
something that I use to build a lot
32:32
of applications, or ideas that I have,
32:34
that maybe I want to almost run in
32:36
parallel. You can try out a couple of
32:38
different ideas and like, okay, I like this
32:41
version better. I'm going to prompt this one
32:43
and go a little bit further. I recently
32:45
wrote a blog post about this called Personal
32:47
Software. I think other people
32:49
have called it Vibe Coding, which I
32:51
like the name of. Basically,
32:54
it's not necessarily saying this is
32:56
production software used by the government,
32:58
but there's lots of software and
33:00
applications to be built. it
33:03
can be very fun to do it in that
33:05
way. So I think we're seeing a lot of
33:07
adoption for those type of use cases. The
33:10
only other thing I'll mention on v0 is
33:12
if you're really curious about how it's built, everything
33:14
that powers v0, we have
33:16
open sourced a ChatGPT-like
33:19
application doing very similar things.
33:21
So it's kind of build your
33:23
own ChatGPT chatbot
33:25
that can generate code and run
33:27
code in the browser. It
33:29
can generate spreadsheets and create images
33:31
and you know, have the
33:33
canvas or artifacts like features that
33:35
Claude and ChatGPT have. And
33:37
that's part of our AI SDK,
33:39
which is another framework that
33:41
we maintain. It's like lower level
33:43
primitives to help you build AI applications. So
33:46
chat.vercel.ai is that template if you want
33:48
to try it out. And it's all open source.
33:50
So you can fork it and build your
33:52
own. Awesome. I
33:54
just want to mention that my connection
33:56
has become a bit spotty all of
33:58
a sudden. So if I drop
34:01
off or anything... hopefully
34:03
that won't happen, but you
34:05
know. No worries. If it
34:07
happens then people will miss
34:09
you because I'll ask all
34:11
my dumb questions. So anything
34:13
else about v0? No,
34:15
I think that's that's the
34:17
majority of it If you
34:19
haven't given it a shot, please
34:21
do and I'd love to
34:23
hear feedback for anybody listening
34:25
Feel free to shoot me
34:27
a message. Nice.
34:31
So of course, I think Vercel
34:33
is best known for Next .js,
34:35
Although it's
34:38
important to note that Vercel and Next
34:40
.js are not the same thing. So
34:44
maybe that's actually something
34:46
worth talking about, like the
34:48
relationship between Vercel and
34:50
Next .js. Yeah. Yeah.
34:53
So going back to 2016,
34:55
kind of a ways away.
34:59
When Vercel was getting started, Guillermo
35:02
was trying to build a really
35:04
high quality front end, a really high
35:06
quality application. And at the time
35:08
wasn't very satisfied with the state of
35:10
tooling and like, you know, every
35:12
engineer thought: well, rather than building my product,
35:14
I'm going to build the tooling
35:16
for the product instead. He spent a
35:19
decent chunk of time building out
35:21
what was the first version of Next
35:23
.js back in 2016. And the
35:25
idea at the time was, putting together
35:27
a React application is kind of hard. And
35:30
he wanted to build a server
35:32
-first or a server -side rendered React
35:34
framework that would make that easy
35:36
to create, you know, application UI,
35:39
marketing pages, dashboards, e -commerce sites, and so
35:41
on. And this was really about the
35:43
same time that Create React app came
35:45
out. So only a few months apart.
35:48
And they've both kind of served unique
35:50
purposes over the years. Since
35:52
then, Next .js has grown to
35:54
add a lot more functionality. The community
35:56
has grown, and it has become a React
35:58
framework that can build pretty much any
36:00
type of application. If you want
36:02
to build an interactive dashboard, like some
36:04
kind of single page app, that's
36:06
totally doable. If you want to build,
36:08
you know, SEO optimized marketing pages
36:10
or, you know, logged out e -commerce
36:12
experiences, that's also possible. If you want
36:15
to build new AI chatbots where
36:17
you can stream in responses from the
36:19
server, that's another prominent use case
36:21
of Next .js. So it all kind
36:23
of sits on the foundation of React
36:25
as the UI rendering layer, the
36:27
UI engine for your components. And
36:29
then we add some niceties in
36:31
the middle to simplify working with images
36:34
and optimizing fonts and optimizing third
36:36
party scripts and streaming content from the
36:38
server and building out your API
36:40
layer and so on and so forth.
36:43
So going back to your original
36:45
question on the relationship over the
36:47
years, we've just continuously been investing
36:49
in this MIT licensed open source
36:51
framework. to a point now where
36:54
we have 1.3 million monthly active
36:56
developers on Next .js. And
36:58
I think we're at 8.5
37:00
million weekly downloads on
37:02
npm. So really the trajectory
37:04
has been great for folks
37:06
picking up and adopting and
37:08
putting Next .js in production for
37:10
both, you know, personal sites,
37:12
small startups and really large
37:14
enterprises. Vercel...
37:17
Well, I would just like
37:19
to interject there, an
37:21
interesting statistic that
37:23
I saw. So
37:26
there's the Google CrUX database.
37:28
Are you familiar with
37:30
it? Yes. So they
37:32
mostly it's used for
37:34
looking at the performance
37:36
of production websites that
37:38
Google scans. But
37:40
it also analyzes the technologies with
37:42
which those sites are constructed. So
37:44
you can actually ask it, show
37:47
me the performance of all the
37:49
React sites or show me the overall
37:51
performance of all the Vue sites
37:53
and so forth. But you can
37:55
also just basically ask it a
37:57
simple question like, How many sites do
38:00
you see that use React? How
38:02
many sites do you see that use
38:04
Next? And it seems
38:06
to me, looking at the numbers, it
38:08
seems to me that
38:10
if you're building a new
38:13
website using React, it's
38:15
probably being built with
38:17
Next. Because
38:20
looking at this,
38:22
again, the statistics
38:24
from 2022 up
38:26
till now, the
38:30
numbers seem to indicate
38:32
that at that period
38:34
of time, React websites
38:36
that CrUX analyzes have grown
38:38
by something like 10%,
38:41
but the Next .js websites
38:43
have tripled or something along
38:45
these lines. So effectively
38:47
it means that if you're
38:49
building a React website, then
38:52
it's probably being built with
38:54
Next .js. Does that
38:56
kind of match what you're saying? So
38:59
the last time I checked
39:01
out the HTTP archive data,
39:04
which backs the CrUX report
39:06
was in December of last
39:08
year of 2024. And
39:10
at the time there were 28,000
39:12
of the top million sites that
39:14
were using Next .js, which is pretty
39:16
good. I'm pretty happy with that. And
39:18
of those 28,000, 50% of them
39:21
were using the new routing
39:23
system inside of Next .js called the
39:25
app router, which under the hood
39:27
uses some of these new react features
39:29
like react server components and server
39:31
actions and some of the lower level
39:33
things like streaming that overall help
39:35
get better core web vitals. So it's
39:37
been awesome to see on three
39:40
fronts. One, just the general adoption of
39:42
folks building react sites. Two, those
39:44
who are choosing to pick Next .js
39:46
as their react framework of choice. But
39:48
then three, those who are now
39:50
starting to build with the app router
39:52
or incrementally migrating over to the
39:54
app router are generally seeing better core
39:56
web vitals across pretty much all
39:59
dimensions versus kind of the previous model
40:01
of Next .js that still exists, but
40:03
we're kind of investing in this
40:05
on ramp over time for people to
40:07
take better advantage of the server
40:09
for some things and better advantage of
40:11
the client for other things. So
40:13
HTTP archive actually identifies websites that use
40:16
the new app router.
40:18
Yeah. Yeah, by looking at the different
40:20
tags in the body. Oh,
40:22
that's cool. I didn't know that. I knew
40:24
that it identifies next. Yes, I didn't know
40:26
that it actually is able to distinguish between
40:28
the page router and the app router. That's
40:31
really cool. But
40:33
conversely, what I'm
40:36
also saying, so
40:38
about well, over
40:40
a half a year ago, I
40:43
switched jobs and while I was, you
40:45
know, hunting for a new position, I
40:48
was looking at, I was speaking with
40:50
various companies and also because of the
40:52
type of position that I was looking
40:54
for, I was also kind of looking
40:56
at the technology stacks that they were
40:58
using and whatnot. And
41:00
what I saw
41:02
was that, again,
41:04
it's anecdotal, but
41:06
it was still...
41:08
fairly consistent, let's
41:10
call it. It
41:12
seemed that indeed if you're building
41:15
a website using React, and
41:17
by website I mean
41:19
something that Google would
41:22
likely rank or scan,
41:25
then it's highly likely that
41:27
you're using Next .js. But
41:29
a lot of organizations
41:31
that are building web apps,
41:34
There are a lot of
41:36
times still predominantly client side
41:38
only in terms of the
41:40
react usage and then they're
41:43
kind of not using react
41:45
or any other kind of
41:47
full stack framework. And
41:49
they're like mostly just doing
41:51
react on the client side.
41:54
and using not necessarily even
41:56
JavaScript on the backend.
41:58
They might be talking directly
42:00
to, you know, restful
42:02
endpoints implemented in whatever. So
42:06
are you thinking about somehow
42:09
addressing that market as well? Yeah,
42:11
totally. I think our story
42:14
today is maybe under-discussed, but
42:16
we have a pretty strong
42:18
story for building single page applications
32:20
in Next.js, or kind of
42:22
fully client applications. Most
42:24
of the time, obviously, I talk a
42:26
lot about the server stuff because
42:28
I'm particularly excited about making it easier
42:31
for developers to use features on
42:33
the server. But for a lot of
42:35
apps, especially, I don't know, an
42:37
internal dashboard at some very large company
42:39
where they've already got an established, you
42:42
know, it's probably not even a
42:44
Node backend. Maybe it's a Go backend
42:46
or Java or something else. They've got
42:48
a RESTful API somewhere. and
42:50
their infra team hands them down this
42:52
opportunity. You can deploy a site
42:54
to S3. You can't run a server,
42:56
but you can put some static
42:58
files here. In doing that,
43:00
their only option basically is to do
43:03
a client -only application. We
43:05
want to make Next.js a great option
43:07
for those folks who are wanting to use
43:09
React as well. We have inside
43:11
of Next .js this thing called a
43:13
static export basically where you can tell
43:15
the app to basically strip away the Node.js
43:17
server. I just want to generate a
43:19
bundle of HTML, JavaScript, CSS files, and
43:21
then I can drop them in an
43:23
S3 bucket and just kind of be
43:26
on with my day. And, you know,
43:28
even further, if I want to basically
43:30
opt into 100% rendering in the
43:32
browser, 100% client-side rendering, we have
43:34
a way you can basically skip server
43:36
side rendering entirely, which for some teams,
43:38
they're like, Yeah, like I know it's
43:40
maybe not the best UX. I
43:43
know that maybe it's a little
43:45
bit slower, but it's fine. Like I'm
43:47
just building something that has a
43:49
bunch of heavy charting libraries and I
43:51
just want to go straight to
43:53
client only and that can be okay
43:55
too. So you're basically saying that
43:57
you can SSG, static site generation, into
43:59
an S3 bucket and you can
44:01
take it a step further and basically
44:03
SSG a blank page into an
44:05
S3 bucket. Totally. Yeah,
44:07
the way I think about it is There
44:10
is a strict SPA, which
44:12
I'll define, you know, strict
44:14
single page application, which is
44:16
basically that blank HTML shell
44:18
that boots up some client
44:20
JS. There's a ton of
44:22
websites built this way, right? Like a lot
44:25
of the client react only apps are just
44:27
built this way. And that's fine. You can
44:29
do that in next JS. What we tried
44:31
to do is build this gradual ramp. So
44:33
you can do that. You can also
44:36
statically pre-generate, pre-render more of
44:38
that page. So it's not necessarily a
44:40
blank HTML page, but like a
44:42
good chunk of the shell that then
44:44
boots up react on the client. Taking
44:47
another step further, you can pre render
44:49
multiple inputs. So maybe you have
44:51
the slash route. Maybe you have slash
44:53
dashboard. Maybe you have slash users. Each
44:55
one of them has a different shell.
44:57
So you're not in the situation where, like,
44:59
the page loads, you see one spinner
45:01
while you're waiting for the client JS
45:03
to boot up and then you've kind
45:05
of loaded your client app. You can
45:07
still get a little bit better UX
45:09
by loading those shells and having different
45:11
shells. And that's like three steps along
45:13
this journey, but there's actually five more
45:15
steps if you want to take it.
45:18
Like if you decide, actually I am
45:20
okay with doing a little bit of
45:22
stuff on the server, you can even
45:24
run a, what's sometimes called, a back-end
45:26
for front-end, but you can use
45:28
the Next.js server just to, effectively,
45:30
have your secrets to talk to your
45:32
API, have your bearer token, and still
45:35
have your REST API somewhere else. So
45:37
you don't have to do some wild
45:39
proxy situation. It's basically just a proxy. Interesting.
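That backend-for-front-end proxy idea can be sketched in plain JavaScript, rather than an actual Next.js route handler; the API base URL and token below are hypothetical placeholders:

```javascript
// The BFF layer holds the secret bearer token and forwards requests to
// the real REST API, so the token never ships to the browser.
const API_BASE = 'https://api.example.com'; // hypothetical upstream API

function buildProxiedRequest(incomingPath, bearerToken) {
  // The browser hits the BFF with a path like /users; the BFF forwards
  // the call upstream with the server-side credentials attached.
  return {
    url: API_BASE + incomingPath,
    headers: {
      Authorization: `Bearer ${bearerToken}`, // secret stays on the server
      Accept: 'application/json',
    },
  };
}

const req = buildProxiedRequest('/users', 'server-side-secret');
console.log(req.url); // https://api.example.com/users
```

In a real app the BFF would then `fetch` that URL and relay the response; the point is only that the token lives server-side.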
45:43
It's especially interesting given that the
45:45
React core team is effectively
45:47
retired Create React App. Yeah,
45:51
I think from their perspective, which makes a
45:53
lot of sense, going back in time when
45:55
Create React app came out, Like I mentioned,
45:57
it was basically the exact same time as
45:59
Next .js. And the state of building a
46:01
React app was way more difficult than it
46:03
is today. Today, there's this vibrant ecosystem of
46:05
frameworks and even lower level tools like bundlers
46:08
who are building deeper integration into all of
46:10
the bits of React. And I think for
46:12
devs getting started, it's just dramatically easier to
46:14
be productive right away. But to have a
46:16
great UX, and it's kind of why I
46:18
mentioned there's like this graduation curve. Yes, you
46:20
can start here in the strict SPA
46:23
mode but like along the way you're probably
46:25
going to want some of these things like
46:27
you probably want to have that little bit
46:29
better UX. You actually don't have to stare at
46:31
that loading spinner while the page is loading,
46:33
and, you know, it'd be kind
46:35
of great if I could prefetch some of
46:37
that stuff on the server ahead of time
46:40
rather than having to wait for the client
46:42
and like offloading all that work for the
46:44
client. So it's kind of like we know
46:46
based on, you know, let's call it 10
46:48
years of React, you're probably going to run
46:50
into like these three or four things:
46:52
how you do routing, how you do
46:54
data fetching, how you load your assets. So
46:57
that's kind of why Next .js gives you all
46:59
the building blocks for that, but then it's
47:01
up to you and you get the choice as
47:03
the developer on how much or how little
47:06
you want to use. If you want to use
47:08
Next.js just as easy-mode React that's all
47:10
client rendered, like that's fine too. It's
47:12
interesting or amusing that it's
47:15
basically you guys and then everybody
47:17
else using Vite. Yeah.
47:21
Yeah. I think Vite has been a great tool
47:23
for those people who want to kind of build
47:25
their own type of system. So maybe they want
47:27
to do, um, you know, they pick
47:29
their own bundler in Vite, they pick
47:31
their own router, maybe React Router. Uh,
47:33
they picked their own other pieces of the
47:35
puzzle, a data fetching solution. Maybe they're using
47:37
something like React Query, for example, and
47:39
they can kind of put those pieces together
47:41
based on whatever the needs are for their
47:43
app. That's totally fine. I still see a
47:45
ton of people doing that. What
47:48
we've tried to do with Next
47:50
.js is provide another option, which
47:52
is like, if you want to
47:54
have all of that baked into
47:56
the framework itself, that's one solution
47:58
that we offer. But yeah, it's
48:00
totally fine. I think it's a
48:03
let a thousand flowers bloom type
48:05
situation where it's not a bad
48:07
thing to have a lot of
48:09
options in the ecosystem. I think
48:11
this is primarily why React is
48:13
still so popular 10 years later.
48:15
But is it really that way?
48:17
Because it does seem that a
48:20
lot of the React core team
48:22
has moved from Meta into Vercel
48:24
to the extent that it almost
48:26
seems like the next version of
48:28
React is Next .js. So
48:30
is it a kind of world
48:33
where Next .js is kind of eating
48:35
React? That's
48:37
not really how I see it, because
48:39
if you look at the percentage of
48:41
the people who work and contribute to
48:43
React that work at Vercel, it's still
48:45
pretty small in comparison to the broader
48:47
scope of the React team. I think
48:49
a lot of people don't know the
48:51
extent of how big the React team
48:53
is at Meta. They do a lot
48:56
of fantastic work, not only for the
48:58
React core team itself, but for React
49:00
Native and for supporting all the React
49:02
development for other kinds of pieces
49:04
of the ecosystem at Meta. Vercel
49:06
certainly is helping push React forward
49:09
in so far as we're contributing
49:11
some of the server features, things
49:13
like server actions, for example.
49:15
We've played a decent part in contributing a lot
49:17
of that work. But
49:20
ultimately at the end of the day, it's
49:22
still a partnership between Meta and us and
49:24
hopefully more companies in the future as well.
49:26
I think from the React team's perspective, they
49:29
would love to have more kind of full
49:31
-time contributors to help build this out. But
49:33
it's a pretty big investment in terms of
49:35
your company's R&D spend. And it's something that,
49:37
you know, Vercel is kind of motivated to
49:39
do because we also are trying to make
49:41
Next.js a great framework that's built on
49:43
top of React. So we're willing to invest
49:45
the capital in terms of funding people on
49:48
our team to help contribute to the core
49:50
library. So yeah, that's the way I think
49:52
about it. I think it's great that we
49:54
help contribute to React. And I would love
49:56
to see other companies. join
49:58
as well. Cool,
50:00
I like that approach. So
50:03
are there are there new innovations
50:05
in Next .js that you want to
50:07
tell us about or things that are
50:09
coming up that people might want
50:11
to hear about? Yeah, I
50:13
think for me the whole
50:15
journey of 2025 is basically
50:18
two things. The
50:20
first one is continuing
50:22
to make 1% iterations
50:24
every release every day.
50:26
on the foundations, the
50:28
fundamentals, really great
50:30
stack traces, really great error
50:33
overlays that are beautiful and they help you
50:35
find the answer quickly. Really
50:37
great performance when you're working
50:39
locally, fast compilation, fast hot
50:41
module reload, fast builds. A
50:44
lot of the things that are
50:46
the bread and butter of the framework
50:48
per se, maybe not the most
50:50
sexy features in the world, but we
50:52
think that performance and stability is
50:54
a really important piece of what people
50:56
who have signed up to use
50:58
Next .js and their company appreciate in
51:00
terms of frequent updates and improvements. So
51:02
that's a heavy, heavy focus for
51:04
us. Then the next release of Next
51:06
.js here in a few weeks, probably,
51:09
or maybe shorter, we
51:11
have a new redesigned error overlay
51:13
that looks so great. It's got better
51:15
stack traces. It's easier to find
51:17
the information. It's got some nice animations,
51:19
like just the little delights of
51:21
polish that make the everyday experience of
51:23
working in the framework better. So
51:26
there's a ton of stuff in that category. And
51:28
then the second category is how do
51:30
we take, you know, years of feedback
51:32
on the framework and kind of continuously
51:34
innovate and simplify the existing model along
51:36
the way, while still respecting that, you
51:38
know, a lot of people are building
51:40
their apps in a certain way and
51:42
we don't want to disrupt their workflows,
51:44
but we want to give them a
51:47
solution over time that's increasingly more simple
51:49
for them to adopt, especially for new
51:51
people who are just getting started in
51:53
the framework. So an example
51:55
of this is in two ways, one
51:57
around caching and two around the story
51:59
with data fetching. So probably the biggest
52:01
piece of feedback on Next.js for
52:03
the past couple of years is that its
52:05
caching is very powerful, but people would
52:07
love to see the default experience be
52:09
a little bit easier. It's like, it
52:11
felt like the ramp from, okay, I
52:13
can do caching to like, oh yeah,
52:15
this is why caching is really hard
52:18
was like happening too quickly. So we've
52:20
been working on an improved system that
52:22
allows you to define directives,
52:24
like the string 'use cache', and
52:26
cache a function or cache a
52:28
React component in a much more granular
52:30
composable way. That's going to be
52:32
coming here very soon. Then the next
52:35
piece that builds on top of
52:37
that is the ability to write code
52:39
that looks asynchronous and have it
52:41
actually be a page that's dynamically rendered.
52:43
So I'll give you an example. You
52:46
have a react server component. You might
52:48
think, okay, that server component requires a
52:50
server. The first misconception there is you
52:52
can take a server component and you
52:54
can do that static export I mentioned
52:57
and run it as a pre -rendered
52:59
component that just spits out some HTML,
53:01
right? So it's kind of like a
53:03
build time. Yeah. It's like a React
53:05
build-time component. It just has the
53:07
access to the server. So you've got
53:09
the server component. You say async function,
53:11
await, get data from database. If
53:13
you write a call like await, get data
53:16
from database, you're probably expecting that to
53:18
be fresh data. Like this is you're making
53:20
a call and you didn't say that
53:22
you wanted anything to be cached, right? But
53:24
the inherent confusion here comes with
53:26
this idea of pre -rendering. If it's
53:28
running during the build, you're trying to
53:30
pre -render as much of the page
53:32
as possible. And that's really good
53:34
because you can get that fast initial
53:36
response when you have the pre -render.
53:38
So the two kind of pieces
53:40
in parallel in this next phase are,
53:42
if I write code that looks
53:44
dynamic, the page should default to
53:46
being dynamic. And then me as a developer,
53:48
I get the opportunity to say, which way
53:51
do I want to go? Should this be
53:53
fresh on every request? OK,
53:55
great. I'd put Suspense around it, React
53:57
Suspense. If it's not, and
53:59
it's something I want to pre-render, I'd put
54:01
'use cache' on it. So you have these
54:03
two paths of which way you want to take.
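Those two paths can be sketched in plain JavaScript; the cached() wrapper below is a hypothetical stand-in for the 'use cache' directive, just to make the contrast runnable on its own:

```javascript
// Path 1: pre-renderable data - computed once, then reused.
// Path 2: dynamic data - a fresh call on every request.
function cached(fn) {
  const store = new Map();
  return (...args) => {
    const key = JSON.stringify(args); // key derived from the inputs
    if (!store.has(key)) store.set(key, fn(...args));
    return store.get(key);
  };
}

let dbHits = 0;
function getDataFromDatabase(id) {
  dbHits += 1; // each call is one real trip to the database
  return { id };
}

const getCachedData = cached(getDataFromDatabase);
getCachedData(1); // computed once...
getCachedData(1); // ...then served from cache

getDataFromDatabase(2); // dynamic path: fresh on every request
getDataFromDatabase(2);

console.log(dbHits); // 3
```

The cached version only ever hits the database once per set of inputs, while the dynamic version pays the trip every time; that is the trade-off the two paths let you pick per call site.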
54:06
And those three things, the
54:09
first two and then the third one
54:11
being partial pre -rendering, which is effectively just
54:13
a way to get a larger amount of
54:15
the page pre -rendered. I
54:17
think we'll simplify the experience of making
54:19
Next .js apps quite a bit. So I'm
54:21
looking forward to that future as well. It's
54:24
interesting. Well, of course, you know that
54:26
they say that there are two hard
54:28
problems in computer science. Naming
54:30
things, cache invalidation, and off-by-one
54:33
errors. So
54:35
caching is hard.
54:38
The need for caching by
54:41
the way, I mean it's
54:43
worthwhile to touch a little
54:45
bit about why all this
54:47
discussion about caching. I think
54:49
it has to do with
54:51
the fact that we're
54:53
encapsulating data access and that
54:56
can really easily lead to
54:58
waterfalls and And so one
55:00
way is to basically obviously
55:02
just break encapsulation and get
55:04
all the data as
55:07
efficiently as possible yourself and stream
55:09
it into the components, but then
55:11
the components aren't self -contained. So
55:13
the alternative is to say, I
55:15
will get the data at the component
55:17
level, but I'm really smart about
55:19
caching to avoid getting the same data
55:22
multiple times and making all the
55:24
data retrieval as efficient as possible. And
55:26
that's the route that you guys
55:28
seem to be taking. It
55:30
will be very interesting
55:32
to see how efficient we,
55:34
how... how we are
55:36
able to achieve this combination
55:38
of efficiency, ease
55:40
of use, and correctness. It's
55:42
not going to be a
55:45
trivial combination. Yeah,
55:47
just to your point on like why caching,
55:51
the cache exists somewhere in the
55:53
system. If it's not in your
55:55
front end, then you're caching your
55:57
API responses, or you have some
55:59
like a in -memory cache in front
56:01
of your database. Like most of
56:03
the time, there's a cache somewhere
56:05
in the stack. And I
56:07
think what we're seeing with a Next .js
56:09
app router app and being more involved
56:11
across the full stack, basically react having
56:13
more of an opinion about the network
56:15
layer, is that those caches are moving
56:18
into more user code that a developer,
56:20
a product developer is having to think
56:22
about, which is very powerful, but it's
56:24
maybe not something they had to consider
56:26
previously. Because like, well, that was some
56:28
other team who worked on the REST
56:30
API. They added the caching layer. I
56:32
just hit that endpoint as much as I want. I
56:35
don't really have to think about it. You could still
56:37
build that model here. Until smoke comes out
56:39
of the back of the computer. Right.
56:41
Right. Or until your cache expires and
56:43
you didn't realize that your cache control
56:45
headers were misconfigured and then actually you
56:47
overloaded your database and you didn't have
56:49
the proper stale-if-error caching headers.
56:51
So now your page is down and
56:53
you have to, you know, do a
56:55
rollback. And there's a lot of hard
56:57
parts in getting that right for sure.
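A toy version of that failure mode, and the stale-serving escape hatch, in plain JavaScript; this is an illustrative in-memory cache, not how any real CDN or framework implements it:

```javascript
// A TTL cache with a "serve stale on error" fallback: when the entry has
// expired and the origin (say, an overloaded database) throws, we return
// the stale value instead of taking the page down.
function makeCache(ttlMs) {
  const store = new Map();
  return {
    get(key, now, loader) {
      const entry = store.get(key);
      if (entry && now - entry.at < ttlMs) return entry.value; // fresh hit
      try {
        const value = loader(); // expired or missing: hit the origin
        store.set(key, { value, at: now });
        return value;
      } catch (err) {
        if (entry) return entry.value; // stale-on-error fallback
        throw err; // nothing stale to serve
      }
    },
  };
}

const cache = makeCache(1000);
cache.get('page', 0, () => 'v1'); // miss: loads v1 from the origin
const fresh = cache.get('page', 500, () => 'v2'); // still fresh: v1
const stale = cache.get('page', 2000, () => {
  throw new Error('database overloaded');
});
console.log(fresh, stale); // v1 v1
```

Without the fallback branch, the last call would propagate the origin error, which is the "page is down, do a rollback" scenario just described.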
56:59
And building it into the system, like
57:02
you said, while still maintaining correctness is
57:04
tough. One of the things with use
57:06
cache, because we can use the compiler, which,
57:08
shout out to Svelte, who does a
57:10
great job of using the compiler as well,
57:12
making that kind of a core part
57:14
of the design of both the language and
57:17
the framework with SvelteKit, is that we
57:19
can look at the inputs into the function,
57:22
the actual variables being used inside of
57:24
the code where you've marked it with
57:26
this directive and automatically figure out the
57:28
cache keys. So you don't, oh shoot,
57:30
I forgot to, you know, mark this
57:32
as a cache key for something that
57:34
actually changes. So I'm not able to
57:36
automatically invalidate that cache when it changes.
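The automatic cache-key idea can be sketched like this; withAutoKey is a hypothetical helper, not the real compiler machinery, but it shows why a key derived from the function's inputs means a changed input can never serve a stale entry:

```javascript
const store = new Map();

// Build the cache key from the function's identity plus all of its
// inputs, so the developer never has to tag keys by hand.
function withAutoKey(name, fn) {
  return (...args) => {
    const key = name + ':' + JSON.stringify(args);
    if (!store.has(key)) store.set(key, fn(...args));
    return store.get(key);
  };
}

let computes = 0;
const getProduct = withAutoKey('getProduct', (id, currency) => {
  computes += 1; // one real computation per unique set of inputs
  return `${id}-${currency}`;
});

getProduct(7, 'USD');
getProduct(7, 'USD'); // same inputs: cache hit
getProduct(7, 'EUR'); // an input changed: new key, fresh compute
console.log(computes); // 2
```

What the compiler adds on top of a sketch like this is finding those inputs for you, including values closed over from enclosing scope, so a forgotten key can't cause a stale read.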
57:38
Like a lot of those really hard
57:40
things, we can automate some of that
57:42
away by using the compiler. It's obviously
57:44
not perfect. There are still cases where
57:46
you have to be aware of what
57:48
you're doing with the cache and tag
57:50
it correctly and invalidate it correctly. But
57:52
more and more of that, that we
57:54
can build into the framework itself, the
57:56
more little edge cases that. you don't
57:58
have to deal with. Yeah, at this
58:00
point, it's worthwhile mentioning that we actually
58:02
had Joe Savona and Sathya from Meta
58:04
to talk about React Compiler, which is
58:06
all about memoization, which is another way
58:08
of saying caching. So
58:10
for sure, I'm totally with
58:13
you on that. I
58:15
wanted to point out one other thing,
58:17
and that is that you're talking about kind
58:19
of figuring these things out as you
58:21
go. Um, I think
58:23
sometimes we kind of take for granted that
58:25
things are just going to work without
58:28
recognizing that. Yeah. Sometimes the first pass is
58:30
not great or at least not ideal.
58:32
And so just keep that in mind. Um,
58:34
when you give the feedback, right? It's
58:36
like, Hey, look, you've got some great tools
58:38
here and we're going to give you
58:40
better tools, you know, based on your feedback
58:42
and things like that. Yeah.
58:45
I think in an ideal world, right?
58:47
You would never ship. And you would
58:49
just cook away in private until you
58:51
had the perfect API. And
58:53
oh man, it was just the best thing in the
58:55
world. People are going to love this. I'm just going
58:57
to keep waiting until I get it just right. And
58:59
the reality is like it's a lot more than that. That
59:01
really sounds nice too. Yeah, it
59:04
sounds great. My dad likes to
59:06
say that the worst enemy of
59:08
good is better. Yes. The
59:11
enemy of done is perfection. Yes.
59:14
Yeah. And it's
59:16
true. I am amused though
59:18
that, you know, I'm still old
59:20
enough to remember that the
59:22
big selling point of JavaScript
59:25
was that it wasn't compiled.
59:27
So it's really interesting to see
59:29
how much of modern web
59:31
development is actually based on bundlers
59:33
and compilers, and you can
59:35
think about bundlers as the modern
59:38
day linkers. The fact
59:40
that we are effectively compiling and linking the
59:42
JavaScript code to get it to work the
59:44
way that we need it to work. Mm
59:46
-hmm. Yeah, I love the ambition.
59:48
I think there's a lot of
59:50
push in the web development community,
59:52
especially those who've been programming for
59:54
a while. Like, can we get
59:56
back to the roots of a
59:58
compiler-free, bundler-free experience? And
1:00:01
I love the ambition of that.
1:00:04
Yeah, yeah. I
1:00:06
think there's good ideas there. I
1:00:08
like the pragmatism of it. I think
1:00:10
the reality is, where do you
1:00:12
want to make the trade off? And
1:00:14
unfortunately, the trade-off with that approach
1:00:16
usually comes at the user experience. And
1:00:18
that's like a trade-off that I'm not
1:00:20
usually willing to make because I'd rather
1:00:22
have a compiler automate a lot of that
1:00:24
work for me. But
1:00:27
you have to revisit your
1:00:29
priors and recheck it year-over-year
1:00:31
as technology continues to get
1:00:33
better and better. So I agree
1:00:35
spiritually with some of what
1:00:37
he's saying, but I disagree on
1:00:39
the effectiveness of compilers and
1:00:41
bundlers in 2025. Yeah, I
1:00:43
have to say, you know, just kind of
1:00:45
coming from the other side of this to maybe
1:00:47
a stronger degree. There's a
1:00:49
lot of it depends in there, right?
1:00:51
If you're doing some things, you
1:00:54
know, maybe you don't need it. And then
1:00:56
in other cases, it makes a lot of
1:00:58
sense. And so you have to look at
1:01:00
it. You have to look at your approach.
1:01:02
You have to look at what you're using,
1:01:04
what your tools are, you know, what
1:01:06
your framework expects of you. And then, yeah,
1:01:08
I don't know that there's necessarily a right
1:01:10
way or a wrong way. A
1:01:12
lot of times it's just, you know, this makes a
1:01:15
lot more sense here and that makes a lot more sense
1:01:17
there. Yeah, it's about the
1:01:19
trade -offs really. I mean, you know,
1:01:21
you can look at the extreme
1:01:23
other direction as expressed by somebody
1:01:25
like Alex Russell. So
1:01:27
for sure, you know, it really
1:01:29
depends on what you're trying to
1:01:31
build, the functionality that you're trying
1:01:33
to ship and whatnot. I mean,
1:01:35
obviously the fastest website is
1:01:37
going to be the blank page.
1:01:39
So, you know, it really depends on
1:01:41
what you're trying to achieve. By
1:01:47
the way, I saw another thing that
1:01:49
might be worth mentioning in the short time
1:01:51
that we have left that you guys,
1:01:53
I think, recently introduced, which is something called
1:01:55
Fluid. Can you say something about
1:01:57
that? Yeah. So
1:01:59
a quick kind of, well, I'll
1:02:01
try to make it quick. A
1:02:04
little history on Vercel's computing. Back
1:02:06
in the day, I'm sure you're probably
1:02:08
familiar, but back in the day,
1:02:10
we offered a serverful platform where
1:02:12
you could deploy Docker containers. You
1:02:14
know, this was many, many years ago.
1:02:18
And in some instances, like that's really great.
1:02:20
You can have a lot of flexibility with
1:02:22
what you bring to the platform. It's, it's
1:02:24
unbounded. You can do anything as long as
1:02:26
you can get it into Docker. It's
1:02:28
harder though to get a
1:02:31
consistently good time because again,
1:02:33
it can be literally anything.
1:02:35
This is back in the day,
1:02:37
kind of pre-Vercel, moving to a
1:02:40
serverless model. Along the way, we
1:02:42
kind of found our
1:02:44
footing, I think, with focusing on
1:02:46
frameworks and focusing on being able
1:02:48
to automatically define and generate the
1:02:50
infrastructure as an output of the
1:02:52
framework. So we call this framework
1:02:54
defined infrastructure. Basically, it means you
1:02:56
take a Next.js or Nuxt or SvelteKit
1:02:58
or whatever framework you want to use, you
1:03:01
write code in the open source way.
1:03:03
You bring it to Vercel and Vercel
1:03:05
looks at the output of the build
1:03:07
process. So you run next build and
1:03:09
you get a bunch of files and
1:03:11
we do the work to actually convert
1:03:13
that into cloud infrastructure. We turn it
1:03:15
into distributing those files to a CDN
1:03:17
or running the on-demand compute through
1:03:19
Vercel functions and so on and so
1:03:21
forth. So for many years now we've
1:03:23
been doing serverless compute on top of
1:03:26
AWS Lambda. And that also
1:03:28
has a lot of benefits,
1:03:30
but also some notable downsides.
1:03:32
Um, I think the biggest
1:03:34
downsides that we've faced over
1:03:36
the years, number one, I
1:03:38
think serverless as a term
1:03:40
has become increasingly, uh, unhelpful
1:03:42
because it means everything to
1:03:44
everyone. And it's
1:03:46
just loud. Yeah. It is basically cloud,
1:03:49
right? It's like, okay. So what exactly
1:03:51
do you mean by that? Do you
1:03:53
mean auto scaling? Do you mean, uh,
1:03:55
cost effective? Like what, what, what are
1:03:57
the more specific things that you mean?
1:03:59
So I have increasingly struggled to
1:04:01
explain what I'm talking about when I
1:04:03
say serverless. The second and like
1:04:06
the product feature that was the
1:04:08
most painful with Lambda is
1:04:10
for every one request to my
1:04:12
Node .js Lambda, I can only use
1:04:14
one function. So it doesn't have the
1:04:16
multi concurrency of what you can
1:04:18
get out of a server. You can
1:04:20
send many requests into one box.
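The capacity difference being described can be sketched with a toy model (illustrative only, not Vercel's or AWS Lambda's actual scheduling): with one request per function instance, a burst of overlapping requests needs one instance each; with in-instance concurrency, instances are shared.

```javascript
// Toy model of the two scaling approaches (illustrative only, not the
// real Lambda or Vercel scheduler).

// Classic Lambda model: every in-flight request occupies its own instance.
function instancesNeededSingle(concurrentRequests) {
  return concurrentRequests;
}

// Server-style model: one instance can hold many in-flight requests,
// up to some per-instance concurrency limit.
function instancesNeededConcurrent(concurrentRequests, perInstanceLimit) {
  return Math.ceil(concurrentRequests / perInstanceLimit);
}

// A burst of 100 overlapping, mostly IO-bound requests:
const single = instancesNeededSingle(100);         // one box per request
const shared = instancesNeededConcurrent(100, 50); // boxes are shared
```

The numbers are made up, but the shape of the trade-off is the one discussed here: IO-bound requests spend most of their time waiting, so packing them into shared instances wastes far less compute.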
1:04:22
Generally that means it's going to be
1:04:24
a lot less efficient with your
1:04:26
actual infrastructure usage, you know, one request
1:04:28
to one function versus load
1:04:30
balancing in many different requests into
1:04:32
a box, effectively a server, that
1:04:35
ultimately ends up saving customers
1:04:37
a lot of money when
1:04:39
you can be more efficient
1:04:41
with how you route the
1:04:43
compute. So over the
1:04:45
years, we had these kind of warts
1:04:47
of serverless that we were trying to fix
1:04:49
and paper over. We would add
1:04:51
new functionality to add streaming to
1:04:53
Vercel functions. We would add
1:04:56
new functionality to prevent, you know,
1:04:58
unbounded recursions that could rack up
1:05:00
denial-of-wallet bills. We would add
1:05:02
functionality to run very long workloads
1:05:04
on top of functions. Um,
1:05:06
and then the latest things here is
1:05:08
actually optimizing the lower level bits. So
1:05:10
like the underlying infrastructure on top of
1:05:13
the function, we rewrote it in Rust
1:05:15
to make it very optimized, which, you
1:05:17
know, of course, just rewriting in Rust
1:05:19
makes it faster, right? No,
1:05:21
but, um, we, we have
1:05:23
really been optimizing every part of the stack,
1:05:25
to where now the last piece was
1:05:27
making it much more concurrent. And
1:05:30
we've now taken all of this and
1:05:32
kind of bundled it into a new computing
1:05:34
model that we're calling fluid compute. It's
1:05:36
not necessarily something that is Vercel-only.
1:05:38
We think you could build your own
1:05:40
version of this on cloud primitives, if you
1:05:42
would prefer. But generally it
1:05:44
is trying to be a hybrid of
1:05:47
servers and serverless, building off of our
1:05:49
experience and what some other people in the
1:05:51
industry have done, and a lot of inspiration
1:05:53
from things like Google Cloud Run, for
1:05:55
example, which is kind of coming at
1:05:57
this from the opposite side, from the
1:05:59
server side, and having auto scaling servers
1:06:02
with the general goal of being very
1:06:04
cost effective. So if I'm
1:06:06
doing a bunch of network IO to a
1:06:08
database or to an AI model, I
1:06:10
don't want to have to spend a bunch of
1:06:13
extra money while I'm just waiting on all that
1:06:15
network IO, which for a lot of apps,
1:06:17
that's kind of the predominant usage
1:06:19
of your compute is just talking back and
1:06:21
forth. It's not the actual CPU time. So
1:06:24
very cost efficient, very
1:06:26
fast to start up and scale down.
1:06:28
So it autoscales based on your traffic,
1:06:30
which I think is really important. And
1:06:33
ideally, you can eliminate this
1:06:35
cold start problem that has plagued
1:06:38
serverless for a very long time. And
1:06:40
we do that by keeping
1:06:42
pre-warmed functions, which is basically like
1:06:44
Google Cloud Run's minimum active instance is
1:06:46
one. It's like you always have
1:06:48
compute running that you can load balance
1:06:50
and send requests into so all
1:06:53
of that is fluid compute and it's
1:06:55
something that Vercel customers can
1:06:57
opt into using today and we've seen
1:06:59
some pretty awesome results from it.
1:07:01
So it's kind of like lambda functions,
1:07:03
but they can live longer and
1:07:05
service multiple requests and don't have to
1:07:08
immediately shut down when they're done.
1:07:10
Am I missing something? Yeah,
1:07:12
totally. And if you're
1:07:14
doing network IO, you're
1:07:16
being charged for the CPU, right?
1:07:19
You can be a lot more efficient
1:07:21
when you're waiting on a bunch of
1:07:23
calls to your AI model or to
1:07:25
your database, which ultimately drives down costs
1:07:27
for very concurrent applications that are doing
1:07:29
a lot of IO. So
1:07:31
in some instances, we've seen
1:07:33
people saving as much as 85%
1:07:35
of their bill previously on
1:07:37
the serverless model. It obviously
1:07:39
depends on your traffic patterns, how
1:07:41
concurrent your traffic actually is,
1:07:43
but in almost all cases, it's
1:07:45
better for the customer. You
1:07:48
said this isn't running on
1:07:50
Lambda or things like that.
1:07:53
What is it running on? Is this your own
1:07:55
platform? Yeah, so
1:07:57
we still use AWS, but
1:08:00
over the years, we've had to,
1:08:02
excuse me, over the years,
1:08:04
we've had to kind of
1:08:06
build our own custom pieces to
1:08:08
handle support before Lambda
1:08:10
eventually had it and some instances
1:08:12
never had it. So for example,
1:08:15
excuse me, one sec. Got
1:08:20
a cough. For
1:08:22
example, Lambda didn't have
1:08:24
streaming until recently, I
1:08:26
think maybe like a
1:08:28
year ago. But when
1:08:30
we launched the Next.js App
1:08:32
Router, we wanted to have streaming and
1:08:34
had to build that kind of
1:08:37
independently on our system. We
1:08:39
still use AWS, still love AWS, but we
1:08:41
had to build a lot of that stuff
1:08:43
ourselves. Said
1:08:47
in another way, you can't go to AWS
1:08:49
and purchase fluid compute off the shelf. Right.
1:08:52
So you're using AWS
1:08:54
as kind of like
1:08:57
the hardware as it
1:08:59
were, but you're building
1:09:01
a lot of the
1:09:03
functionality on top of
1:09:05
that stuff that others
1:09:07
might try to
1:09:09
get from AWS, but can't necessarily
1:09:12
get from them, at least not yet. Yeah,
1:09:14
and we've seen this exact same thing
1:09:16
play out in our build infrastructure as well.
1:09:18
So yes, you can go run a
1:09:21
bunch of builds on EC2, but
1:09:23
we had done that for many, many years and
1:09:25
found there were a lot of little optimizations
1:09:27
we wanted to make to make it more efficient
1:09:29
and more cost effective. And
1:09:31
it's not like we set out with the
1:09:33
goal like, man, we really want to rebuild our
1:09:35
build infrastructure and do something custom. It
1:09:38
was more that innovation was a
1:09:40
necessity here. So we have a technical
1:09:42
blog post that talks about this, but
1:09:44
our internal build system is called Hive
1:09:46
and effectively we outgrew EC2 based
1:09:48
on, you know, millions and millions and
1:09:50
millions of builds and ended up kind
1:09:52
of building our own system on top
1:09:54
of the underlying AWS primitives. Cool.
1:09:58
So AWS is still great
1:10:00
for like super reliable,
1:10:02
fast, efficient cloud infrastructure. We're
1:10:05
just trying to
1:10:07
build the right abstractions for developers so
1:10:09
they can take the best advantage of
1:10:11
that infrastructure. Gotcha.
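The "framework-defined infrastructure" idea, looking at build output and deciding which primitive serves each artifact, can be caricatured in a few lines. This mapping is hypothetical, not Vercel's actual build pipeline:

```javascript
// Toy sketch of framework-defined infrastructure (hypothetical mapping,
// not Vercel's real build pipeline): classify each build artifact and
// decide which infrastructure primitive should serve it.
function planInfrastructure(buildOutput) {
  return buildOutput.map((file) => {
    if (file.endsWith(".html") || file.endsWith(".css") || file.endsWith(".js")) {
      return { file, target: "cdn" };      // static assets go to a CDN
    }
    if (file.startsWith("api/") || file.endsWith(".func")) {
      return { file, target: "function" }; // server code becomes on-demand compute
    }
    return { file, target: "static" };     // everything else: static hosting
  });
}

const plan = planInfrastructure(["index.html", "app.js", "api/hello.func"]);
```

The point of the real system is that the developer never writes this mapping; the platform derives it from the framework's build output.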
1:10:15
So for, for the stuff in
1:10:17
fluid, how much of it is
1:10:19
sort of transparent, you know, like
1:10:21
I just build the next app
1:10:23
and then I fluid it and
1:10:25
then it just works. And I
1:10:27
get those optimizations versus, okay,
1:10:29
you're going to deploy this to fluid. You've
1:10:31
got to use some of these APIs to
1:10:33
get some of the goodies. Yeah,
1:10:35
it's fully transparent. And this is
1:10:38
kind of the beauty of the framework
1:10:40
defined infrastructure model is you, you
1:10:42
wrote a Next.js app or any framework
1:10:44
app and you had some kind
1:10:46
of API and it works on your
1:10:48
local machine. You've got your API.
1:10:50
Awesome. You deployed to Vercel. We understand
1:10:52
the output of the framework was
1:10:54
an API. We convert it
1:10:56
into a Vercel function and you flip
1:10:58
on the switch that says, yes,
1:11:01
I would like to use fluid compute.
1:11:03
You now have more cost -effective, more
1:11:05
concurrent optimized infrastructure without having to
1:11:07
write the infrastructure code yourself. It's still
1:11:09
an output of the framework. So
1:11:12
what frameworks do you support then? Sorry,
1:11:14
say it again. What frameworks do
1:11:17
you support then for fluid? Um,
1:11:19
any framework that can run compute
1:11:21
on Vercel. So any server rendered
1:11:23
framework, any framework that allows APIs,
1:11:25
or even, you know, if you
1:11:27
want to do, for example, a,
1:11:29
uh, a client-side Vite application that
1:11:31
uses the Vercel functions API directory,
1:11:33
which is a way to add
1:11:35
server side code to just a
1:11:37
client-only app. Those functions
1:11:39
can also use fluid as well.
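For the API-directory style mentioned here, a handler is just an exported function taking Node-style request and response objects. The handler shape is the classic documented one, but treat the details as a sketch and check Vercel's docs for specifics; the mock response below is only there so the example runs anywhere:

```javascript
// Sketch of a Vercel-style function for an `api/` directory (classic
// Node handler shape; check Vercel's current docs). In a real project
// this would live in something like api/hello.js and be exported.
function handler(req, res) {
  const name = (req.query && req.query.name) || "world";
  res.status(200).json({ greeting: `hello, ${name}` });
}

// Tiny mock of the response object so the handler can run outside
// the platform, e.g. in a unit test:
function mockRes() {
  const out = {};
  return {
    status(code) { out.code = code; return this; },
    json(body) { out.body = body; return this; },
    result: out,
  };
}
```

Deployed on the platform, the same function would be picked up from the `api/` directory and, with the switch flipped, run on fluid compute with no handler changes.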
1:11:41
Uh, and also you can. you
1:11:43
know, maybe not as well known,
1:11:45
but you can deploy Express apps
1:11:47
to Vercel or, you know, newer
1:11:49
frameworks like Hono, um, which
1:11:51
is like a backend, uh, Express replacement. And
1:11:53
it also supports Python. So we run, uh,
1:11:55
you can run Python APIs on Vercel too. Very
1:11:59
cool. How
1:12:03
about Ruby on Rails? Yeah,
1:12:07
yeah. It would, um, it would be
1:12:09
interesting, not something that is kind of
1:12:12
top of our list right now for Ruby
1:12:14
or for PHP, but there are community
1:12:16
libraries that do it. Um, so some people
1:12:18
have kind of hacked around it in
1:12:20
the community, which I think is pretty interesting.
1:12:22
So what else should people know? What's
1:12:24
going on at Vercel? It's always interesting to
1:12:26
see what you guys are doing. Yeah.
1:12:28
So we talked about, uh, v0, we
1:12:30
talked about the AI SDK. We talked
1:12:33
about Next.js, some of our improvements there.
1:12:35
Talked about Vercel and fluid compute. Um,
1:12:37
we talked about the flags SDK. Um,
1:12:42
We talked a little bit about
1:12:44
ShadCN UI, which I think is
1:12:46
interesting. ShadCN works
1:12:48
at Vercel. So it's something that
1:12:51
we are always investing in
1:12:53
as well. And v0 natively understands
1:12:55
ShadCN components, as well as
1:12:57
kind of funnily enough, ChatGPT and
1:12:59
Claude also, like their UI
1:13:02
applications on the web also understand
1:13:04
ShadCN and can generate ShadCN
1:13:06
components, which is pretty cool. So
1:13:08
it's awesome to see that standardization there. Um,
1:13:13
I think that pretty much covers
1:13:15
everything. I mean, we can, we can
1:13:17
talk about other bits if you
1:13:19
would want, um, around Vercel's infrastructure
1:13:21
and, um, or other Next.js
1:13:23
features, but we've covered quite
1:13:25
a bit of surface area. Yeah.
1:13:28
Well, we're kind of getting toward the end of our
1:13:30
time too. Um, if people
1:13:32
want to connect, uh, they have questions
1:13:34
about Vercel or v0 or
1:13:36
anything else we've talked about. How
1:13:38
do they reach you or get help from other people
1:13:40
at Vercel? Yeah, if you want
1:13:42
to reach out to me, please send me
1:13:45
a message on X Twitter. If you have
1:13:47
feedback on any of our products, there should
1:13:49
be a feedback button in the UI on
1:13:51
every single one of our services. If you
1:13:53
want to just go directly there, that
1:13:55
all gets read and routed to the right
1:13:57
teams. So just know your feedback is heard. And
1:14:00
you can also email me Lee at
1:14:02
Vercel.com for those of you who still like
1:14:04
email. All
1:14:07
right Well, I'll tell you one
1:14:09
thing, you know, we predominantly use
1:14:11
Slack, and I keep losing things
1:14:13
on Slack. Somebody sends a message
1:14:15
in one of the channels I'm
1:14:18
like subscribed to like a million
1:14:20
channels because of the type of
1:14:22
role that I perform at the
1:14:24
company. Yeah, and if something
1:14:26
is sent on some channel and
1:14:28
I didn't catch it when it
1:14:31
was sent There's a good chance
1:14:33
that I will never catch it.
1:14:35
Hmm Yeah,
1:14:37
I've had to get very I've
1:14:39
had to become a very skilled Slack
1:14:41
sleuth. I don't know the right
1:14:43
the right words, but I'm very
1:14:45
good at navigating the mess of
1:14:48
Slack channels now and staying on
1:14:50
top of what I need to
1:14:52
be on top of because yeah as
1:14:54
a company grows There's just lots
1:14:56
of information happening in real time in
1:14:58
Slack. Yep. Right,
1:15:01
right, for
1:15:03
machine learning. Yeah,
1:15:05
here's a summary of all the
1:15:07
things here's a summary of all
1:15:09
the things I should know that
1:15:11
happened in Slack since
1:15:13
the last time I checked. Totally.
1:15:16
Yup. All right, let's have Dan
1:15:18
start us with picks Dan. What
1:15:20
are your picks? I
1:15:22
don't have that much. You
1:15:24
know what? I'll mention just
1:15:26
the one thing. So I
1:15:28
think I picked this one
1:15:31
before, a while back. It's a
1:15:33
podcast called Revolutions
1:15:35
by Mike Duncan, the same guy
1:15:37
that did the history of
1:15:39
Rome podcast. I'm very much a
1:15:41
history buff. And
1:15:44
Revolutions is like him going
1:15:46
through big revolutions in history,
1:15:48
starting from the English Revolution,
1:15:50
the less well-known English Revolution
1:15:52
with Oliver Cromwell, and then
1:15:54
going through the American Revolution,
1:15:56
the French Revolution,
1:15:58
the Mexican Revolution,
1:16:00
the Russian Revolution, etc. And
1:16:03
it's highly recommended and very interesting,
1:16:05
but he's now doing something that's
1:16:07
kind of out there. I mean,
1:16:09
after the Russian Revolution, he got
1:16:11
to the point where if he
1:16:13
kept on talking about even more
1:16:16
modern revolutions, it would stop being
1:16:18
so much about history and being
1:16:20
more about politics, I guess. So
1:16:23
instead, what he did,
1:16:25
he kind of invented his
1:16:27
own revolution. He's talking
1:16:29
about a revolution on Mars
1:16:32
in the 24th century or
1:16:34
23rd or 24th century or
1:16:36
something like that. So it's
1:16:38
really interesting how he's inventing
1:16:40
a revolution to talk about
1:16:42
and it's kind of like
1:16:44
a science fiction story in
1:16:46
a way. But it's
1:16:49
very engaging and I highly
1:16:51
recommend it. And
1:16:53
so that would be, if
1:16:56
you've not listened to any of
1:16:58
that podcast, I would recommend
1:17:00
going back to the start. It's got
1:17:02
hundreds of episodes. It's
1:17:04
of course not the same type of
1:17:06
podcast as ours. It's not a
1:17:08
discussion. It's not an interview. It's all
1:17:10
scripted. He basically reads out the
1:17:12
script that he wrote, but he does
1:17:14
a ton of research. It's highly
1:17:16
engaging, very amusing. Very
1:17:18
informative. Again, highly recommended, and
1:17:21
the new stuff is recommended
1:17:23
as well and that would
1:17:25
be my pick. Awesome. Steve,
1:17:27
what are your picks? All
1:17:30
right, so before I get into the
1:17:32
high point of every episode, the
1:17:34
dad jokes of the week, I'm gonna
1:17:36
do a shameless plug here in terms
1:17:38
of an email I got through LinkedIn
1:17:40
from somebody who's a listener, and
1:17:42
I won't share his name to protect
1:17:44
him from the abuse he would receive
1:17:46
for somebody who likes my dad jokes.
1:17:49
But he did say that, hey, I'm a
1:17:51
longtime listener of JS Jabber and enjoy
1:17:53
it very much, particularly your dad jokes. I
1:17:55
have retold a few of them and
1:17:57
got mostly eye rolling, but a few guffaws.
1:17:59
So like, OK, that's good. Making
1:18:02
some good progress there. So now
1:18:04
to the reason for that email,
1:18:06
the dad jokes of the week.
1:18:09
So recently I changed the
1:18:12
voice, you know, on my car's
1:18:14
GPS. You can use a different accent and
1:18:16
stuff, you know. I changed it to
1:18:18
male. So now it just says it's around
1:18:20
here somewhere keep driving. Oh
1:18:26
My dad joke drumroll
1:18:28
is not working, bumming me out
1:18:30
here. Okay, so I was
1:18:32
ordering a pizza the other day
1:18:34
and they asked me if
1:18:36
I wanted to cut it into
1:18:38
four or eight slices. I
1:18:40
said four, there's no way I could
1:18:42
eat eight. Ah,
1:18:45
there we go. All right. You kind of
1:18:47
reminded me of the fact that I say
1:18:49
that I don't care about the price of
1:18:51
gas because I always pay $50. Got
1:18:56
you. Sorry.
1:18:59
I'm going to give you that. Oh,
1:19:02
dang it. Drum roll
1:19:04
is not working yet. Sorry. I tried to give
1:19:06
you a drum roll. There we go. A little
1:19:08
delayed, sorry. We're back to Riverside again. The UIs,
1:19:10
there's a little delay there. It's messing me up.
1:19:12
So I gotta get back on the train again. You
1:19:15
need to play the drum roll before you
1:19:17
tell the joke. Yeah, there's enough of a
1:19:19
lag there. I gotta do it. So
1:19:22
two more. So my favorite: I came up
1:19:24
with a new superhero. Chuck's Marvel
1:19:26
shirt that he's wearing today sort of inspired me
1:19:28
this. I don't know if he's part of the
1:19:30
Marvel Universe or maybe another one. But
1:19:32
it's typo man because he writes
1:19:34
all the wrongs. That's
1:19:39
W-R-I-T-E-S. And then finally,
1:19:41
I told my wife I
1:19:43
wanted to be cremated. She
1:19:45
made an appointment for Tuesday
1:19:47
Well, there was the
1:19:49
Monty Python Meaning of Life
1:19:51
movie where, you know,
1:19:53
They knock on his door
1:19:56
and say you've signed
1:19:58
to be an organ donor.
1:20:00
We've come to collect right
1:20:02
then. Always look on the
1:20:04
bright side. That's a
1:20:06
different one though, right? All
1:20:08
right, I'm gonna go in with my picks.
1:20:11
I usually do a board game pick. I
1:20:14
haven't played anything new lately. I'm
1:20:16
gonna pick one that I haven't picked in a while. So
1:20:19
my wife and I went down
1:20:21
to St. George, Utah for the
1:20:23
Parade of Homes. We do this
1:20:25
every year on the last weekend
1:20:27
in February, basically. And
1:20:30
my sister-in-law and her husband
1:20:32
come and they always bring the
1:20:34
same game. And we really enjoy
1:20:36
playing. It's called the Quacks of
1:20:38
Quedlinburg. And what you're
1:20:40
doing is you're basically brewing
1:20:42
a potion. And
1:20:45
if you get too many of the wrong
1:20:47
element, the white elements in it, it'll blow
1:20:49
up. And so you're trying
1:20:51
to get as big a potion
1:20:53
as you can, often with
1:20:55
as many elements as possible, unique
1:20:57
elements, without blowing up
1:20:59
your pot. And then you get
1:21:01
bonuses for playing it. I
1:21:04
think we were playing with some of the extensions,
1:21:06
but the base game is actually pretty good. Um,
1:21:11
let me see. I didn't look it
1:21:13
up on BoardGameGeek. So
1:21:15
let me grab that real quick. And
1:21:17
I can tell you what the,
1:21:19
um, it's
1:21:21
got a board game weight of
1:21:23
1.94. So if you like, if
1:21:25
you like a game that'll make you think,
1:21:27
but it's not like super complicated, that's
1:21:29
about where this clocks in. Um,
1:21:32
I think we played it in an hour. Anyway,
1:21:35
fun game. So
1:21:37
I'm going to pick that. I'm
1:21:39
also going to, and
1:21:41
I'm just that guy
1:21:43
sometimes. So we
1:21:45
are on Riverside. It's been mentioned a
1:21:47
few times. We were on StreamYard. I
1:21:50
was having some issues with StreamYard,
1:21:52
working things out to get the payments
1:21:55
to work with them and things
1:21:57
like that. And eventually they
1:21:59
downgraded the account and deleted some
1:22:01
of the episodes that we hadn't
1:22:03
released yet. Oh,
1:22:05
wow. And so,
1:22:08
yeah. Caveat
1:22:10
Emptor, if you're going to go
1:22:12
use StreamYard, they have a great
1:22:14
tool, but they might just stab you in
1:22:16
the kidney. So
1:22:19
anyway, deleting
1:22:21
content is a big no-no,
1:22:23
I mean... Yeah. And
1:22:25
I tried to work with them, I
1:22:28
tried to figure out how to get it,
1:22:30
and they just came back and said
1:22:32
we deleted it, so... Anyway, yeah,
1:22:34
so if you're going to do this kind of
1:22:36
a thing, just be aware,
1:22:38
we've never had that issue with Riverside. I
1:22:41
did go through a bit of
1:22:43
a process just talking to Riverside.
1:22:45
And the process was them showing me all
1:22:47
the things it does now. And it does,
1:22:49
like all of the things that I had
1:22:51
wished it had done before, it
1:22:53
basically does now. You can actually edit
1:22:56
your episodes in Riverside. And
1:22:58
it uses AI to clean up
1:23:00
a bunch of stuff. It automatically
1:23:02
generates your transcripts, it'll
1:23:04
write you show notes you
1:23:07
know, all the
1:23:09
things that I've either been
1:23:11
paying for other systems
1:23:13
to do or wishing that
1:23:16
I had a system
1:23:18
to do it does all
1:23:20
those things so I'm
1:23:22
gonna pick Riverside. Riverside is
1:23:24
not cheap either but
1:23:27
you know Anyway,
1:23:29
it's awesome. Anything's better
1:23:31
than getting
1:23:33
your episodes deleted. Yeah.
1:23:36
So yeah, we lost a couple. I think we
1:23:39
lost more on Ruby Rogues than we did on
1:23:41
JavaScript Jabber, but we may have to reach out
1:23:43
to a couple of people and say, hey, they deleted
1:23:45
it. We're going to re -record it. But
1:23:48
yeah, I'm delighted to be
1:23:50
on Riverside. And apparently it's
1:23:52
an Israeli company, which is
1:23:54
also cool. I
1:23:56
don't know that I
1:23:59
care a ton about where
1:24:01
any company I use is based
1:24:03
But I'm kind of curious,
1:24:05
because it was streamed live. So
1:24:07
it's it's on YouTube. It's on
1:24:09
Oh, yes, we could get them
1:24:11
that way. That's a good, that's
1:24:13
a good... Here we go, troubleshooting
1:24:16
live on JavaScript. Yeah, we should
1:24:18
be able to pull them off
1:24:20
of YouTube because they're on they're
1:24:22
on the channel for the network And
1:24:25
so we can download them off of there. You
1:24:28
just saved us a little work. Thanks,
1:24:30
Dan. Besides
1:24:32
that... I still
1:24:34
remember that episode that
1:24:36
we did without
1:24:39
pressing the record button.
1:24:42
We've done that once or
1:24:44
twice. I've done that before,
1:24:46
yeah. That's a mistake
1:24:48
you only make a couple of times and
1:24:50
then you're... Very
1:24:55
first podcast episode I ever did. I did that
1:24:57
and had to re -record it. The guy was
1:24:59
gracious enough to re -record it with me. Oh, yeah.
1:25:02
Never forget that. Uh,
1:25:04
by the way, you know,
1:25:07
as an interesting pick, um, we
1:25:09
now use, uh, for some
1:25:11
reason, Google Meet at work instead
1:25:13
of zoom, which is not
1:25:15
necessarily what I would have chosen,
1:25:17
but they have a good
1:25:19
feature, which is you can preconfigure
1:25:21
uh, the session to be
1:25:23
recorded because you never remember to
1:25:25
press record in real time. Yeah.
1:25:28
So, so it's, it's one thing that I
1:25:30
do now when it, when it's sessions that I
1:25:32
scheduled that I know need to be recorded.
1:25:34
Yep. I think Lee needs to run. Do you
1:25:36
have anything you want to throw out real
1:25:38
quick? Um, I'll
1:25:40
give a shout out to Zed, my, uh, code
1:25:42
editor. I mentioned it a little bit in
1:25:44
the podcast, but it's, it's been a joy to
1:25:47
use and, uh, they're shipping
1:25:49
tons of good improvements. So. Big fan.
1:25:51
All right, good deal. Well,
1:25:53
thanks again for coming. So
1:25:55
much interesting stuff going on. You're right
1:25:57
in the middle of it. Thank you for having me. All
1:25:59
right, we'll wrap it up here, folks. Until
1:26:02
next time Max out