Episode Transcript
0:00
So, hey
0:05
listeners, welcome back to No Priors. This
0:07
episode marks a special milestone. Today is
0:09
our hundredth show. Thank you so much
0:11
for tuning in each week with me
0:13
and Elad. And it's been an exciting
0:15
last couple of weeks in AI, so
0:18
we have lots to talk about. Why
0:20
don't we start with the news of
0:22
the hour, or you know, really the
0:24
last month at this point, and DeepSeek.
0:26
Elad, what's your overall reaction? DeepSeek
0:28
is one of those things which is
0:30
both really important in some ways and
0:32
then also kind of what you'd expect
0:35
would happen from a trendline perspective. And
0:37
I think there was a lot of
0:39
interest around DeepSeek for sort of
0:41
three reasons. Number one, it was a
0:44
state-of-the-art Chinese model that seemed to have really
0:46
caught up with a number of things on the
0:48
reasoning side and in other areas relative to
0:50
some of the Western models. And
0:52
it was open source. Number two, there was
0:54
a claim that it was done very cheaply.
0:56
So I think the paper talked about like
0:58
a five-and-a-half-million-dollar run. That's
1:00
sort of the end of it. And then lastly, I think
1:03
there's this broader narrative of who's really behind
1:05
it and what's going on and some perception
1:07
of mystery which may or may not be real.
1:09
And as you kind of walk through each one
1:12
of those things, I think on the first one,
1:14
you know, state of the art open source model
1:16
with some recent capabilities. And they actually did some
1:18
really nice work. You read through the paper, there's
1:21
some novel techniques in RL that they worked on
1:23
and that, you know, some other labs are
1:25
just starting to adopt. I think some other labs
1:27
would also come up with some similar things over
1:29
time, but I think it was clear they'd
1:32
done some real work there. On the cost
1:34
side, everybody that I've at least talked to
1:36
who's savvy to it basically views every sort
1:38
of final run for a model of this type
1:40
to roughly be in that kind of dollar range.
1:43
you know, $5 to $10 million or something like
1:45
that. And really the question is how much
1:47
work went in behind that, before they distilled
1:49
down to this smaller model. And by consensus,
1:51
everybody thinks that they were spending hundreds
1:54
of millions of dollars on compute leading
1:56
up to this. And so from that perspective,
1:58
it wasn't really novel. I think the
2:00
sort of 20% drop in Nvidia's stock
2:02
and everything else that happened as news
2:05
of this model spread was a bit
2:07
unwarranted. And then the last one, which
2:09
is sort of speculation of what's going
2:11
on, is it really a hedge fund, is
2:14
something else happening? Like, you know, thoughts here
2:16
are a little bit, well, speculative. There's all
2:18
sorts of reasons that it is exactly
2:20
what they say it is, and then
2:23
there's some circumstances in which you could
2:25
interpret things more broadly. So that's
2:27
kind of my read. There's probably more
2:29
to it, but to your point, it's
2:31
also like what you might expect,
2:33
especially given historical precedent with like
2:35
GPT-3, GPT-3.5, and then ChatGPT.
2:37
So like DeepSeek-V3, like
2:39
the base model, big AI model,
2:41
pre-trained on a lot of internet
2:43
data to predict next tokens, like
2:45
that was out in December, right?
2:48
And Nvidia's stock did not crash
2:50
based on that news. So I
2:52
think it's just interesting to recognize
2:54
that, like, people obviously do not
2:56
just want the raw likelihood of the next
2:58
word in a streaming way, and
3:00
the work of post-training,
3:02
making it more useful with
3:04
human feedback or more specific data,
3:06
like high-quality examples of prompts
3:08
and responses, just like we've seen
3:10
with the chat models like ChatGPT,
3:12
the instruction fine-tuning that
3:14
made this such a breakthrough experience,
3:16
that really mattered. And then
3:18
the, as you said, the like
3:20
narrative violation, release of R1, reasoning
3:22
model, as a parallel model to
3:24
like OpenAI's o1. I
3:26
think that was also the breakthrough
3:28
moment in terms of people's understanding
3:30
of this. Well, it's also like
3:33
20 years of China, America, technology dominance
3:35
narrative, right? Yes. Like I think
3:37
it was also kind of this
3:40
zeitgeist around US versus China, you
3:42
know, where the West is far ahead.
3:44
And so, you know, will they ever catch up,
3:46
etc. And this kind of showed that Chinese models
3:48
can catch up really fast. But I do think the
3:50
cost thing was a huge component of it. And
3:52
again, I think cost may have been in some
3:54
sense misstated, or misunderstood at least. It's not
3:57
clear to me that, like, final model runs
3:59
at scale are in this price range,
4:01
but I think what you were
4:03
saying before I completely agree with,
4:05
experimentation tends to be a multiple,
4:07
like you need to have tooling
4:09
and data work and the experimentation
4:12
and the pre-training run and data
4:14
generation cost and post-training and inference,
4:16
right? I'm sure I'm missing something
4:18
here. It seems very unlikely that
4:20
there hasn't been a large multiple
4:23
of $6 million spent in total.
4:25
But I think there was also
4:27
a narrative violation here in that,
4:29
like, even at a multiple of
4:31
$6 million, it's not like a
4:33
multi-billion dollar entry price or a
4:35
Stargate-sized entry price to go compete.
4:37
And I think that is something
4:39
that really shook the market. That should
4:42
be expected, because if you look at
4:44
the cost of training a GPT-4 level
4:46
model today versus two years ago, it's
4:48
a massive drop in cost. And if
4:50
you look at, for example, inference costs
4:52
for a GPT-4 level model, somebody in
4:55
my team kind of worked out, and
4:57
in the last 18 months we saw
4:59
a 180x decrease in cost per
5:01
token for equivalent level models.
5:04
180x, not 180%, 180 times. So
5:06
the cost collapse on these things
5:08
is already quite clear. That's true
5:10
in terms of training equivalent
5:12
models. That's true in terms of
5:14
inference. And so, again, I kind of
5:17
just view this as roughly on trend, and
5:19
maybe it's a little bit better and they've
5:21
come up with some advanced techniques, which, you
5:23
know, they absolutely have. But it does
5:25
feel to me a little bit overstated from
5:27
the perspective of how radical it is. I
5:29
do think it's striking what they did and
5:31
it kind of pushes US open source forward as
5:33
well which I think will be really important. But
5:36
I think people need to really look at
5:38
the broader picture of these curves that
5:40
are already happening. Do you think it's
5:42
proof that models are commoditizing, the fact
5:44
that they've converged over the last 18 months? There's
5:46
this really great website called
5:48
artificialanalysis.ai that actually
5:50
allows you to look at the various
5:52
models and their relative performance across a
5:55
variety of different benchmarks and the
5:57
people who run this actually do
5:59
the benchmarks themselves. They'll go ahead and
6:01
retest it, as opposed to taking the paper at
6:03
face value. And you see that for
6:05
a variety of different areas, these models
6:07
have been growing closer and closer in
6:09
performance. And there's different aspects of reasoning
6:11
and knowledge, scientific reasoning and knowledge, quantitative
6:14
reasoning and math, coding, multilinguality, cost per
6:16
token relative to performance. And they kind
6:18
of graph this all out for you.
6:20
And they show you by provider, by
6:22
state of the art model, how do
6:24
things compare? And things are getting closer
6:26
over time versus more dispersed over time.
6:28
So I think in general, the trend
6:30
line is already in this direction where
6:32
it seems like a lot of people
6:34
have moved closer and closer to equivalency
6:36
than they were, say, 18 months ago,
6:38
where I think there were enormous disparities.
6:40
And obviously there are certain
6:42
areas where different models are quite
6:44
a bit ahead still. But on
6:46
average, things are starting to net out
6:48
a little bit more, and that may
6:50
change, right? Maybe somebody comes out with
6:52
an amazing breakthrough model and they leapfrog
6:54
everybody else for a while. But it
6:56
does seem like the market has gotten
7:00
closer than it was even just like a year ago.
7:02
What do you think is the value, then, of having the state-of-the-art model?
7:43
Having something dramatically better makes
7:45
a difference. So that could be
7:47
data labeling, it could be artificial data
7:49
generation, it could be other aspects of
7:51
post-training. So I think there's lots of
7:53
things that you could start doing when
7:55
you have a really good model to
7:57
help you. It could be coding and
7:59
sort of coding tools, it could be
8:01
all sorts of things. You know, there
8:03
is an argument that some people make
8:06
that at some point, as you move
8:08
closer and closer to some form of
8:10
liftoff, the more state-of-the-
8:12
art the model is, the more it
8:14
bootstraps into the next model faster and
8:16
then it just accelerates for you and
8:18
you stay ahead. I don't know if
8:20
that's true or not, I'm just saying
8:22
that's something that some people speculate
8:24
on sometimes. Are there other things that
8:26
you can think of? No, I think
8:28
one thing you mentioned, maybe if I
8:30
just extend it, is like kind of
8:32
underpriced, or like not yet understood enough
8:34
by the market as a theory, which is
8:36
the idea that if you have a
8:38
like high quality enough base model to
8:40
be doing synthetic data generation for like
8:42
a next generation of model, that is
8:44
actually like a big leveler. Right. And
8:46
if you believe that there will be
8:48
continued availability of like more and more
8:50
powerful base models, that that's a big
8:52
leveler of the playing field in terms
8:54
of having like, you know, self-improving models.
8:56
And so that's an interesting thing that
8:58
people have not really talked about.
9:01
There are different ways to have value
9:03
from being at the frontier. One of
9:05
the things that was really interesting to
9:07
me was that like the DeepSeek
9:09
mobile app became like, you know, top
9:11
contender in the App Store for a
9:13
little bit. Like, cheapest, most capable model
9:15
in the market actually matters to consumers
9:17
and they can tell and that will
9:19
drive consumer adoption and that's what happened
9:21
and like that's why you need to
9:23
have the SOTA model to create these
9:25
new experiences and there's a competing view
9:27
which is just like well like this
9:29
whole drama is quite interesting and people
9:31
are trying it as much because like
9:33
they want to see what the leading
9:35
Chinese AI model is like if it's
9:37
as good as OpenAI and Anthropic
9:39
and such. I definitely believe that leading
9:41
capability can lead to a product that
9:43
draws consumer attention, but I think in
9:45
this case it's more the latter. Two
9:47
other things that kind of happened this
9:49
past week were on the OpenAI
9:51
side. One is they released deep research.
9:53
So speaking of really interesting advancements and
9:56
capabilities, and then secondly they announced Stargate,
9:58
which was, you know, a massive series
10:00
of investments across AI infrastructure that was
10:02
announced with Trump at the White House,
10:04
what are your views on those two
10:06
things that in some sense kind of
10:08
overlap, in terms of OpenAI really
10:10
advancing different aspects of the state of the art in terms of
10:12
what's happening right now? Deep research is
10:14
a really cool product. I encourage everybody
10:16
to try it. The biggest deal to
10:18
me is that it immediately raises the
10:20
bar for a number of different types
10:22
of knowledge work where like a, you
10:24
know, where I might have hired a...
10:26
median intern or analyst before, I mean
10:28
we don't do that here, but like
10:30
where one could hire a median analyst
10:32
or intern, I'm going to immediately comp
10:34
a bunch of their work to what
10:36
you could do with deep research and
10:38
like your ability to do better with
10:40
deep research. And like the comp is
10:42
hard. I'd say it is a really
10:44
valuable product. I expect other people to
10:46
adopt this pattern too, but I think
10:48
it's a really novel innovation. Kudos to
10:50
the team. I would say I think
10:53
it is more useful, at least to
10:55
me, upon first blush. I'm sure they're
10:57
working on this. In domains I understand
10:59
less, to do surveying and to, like,
11:01
make sure I have a comprehensive view
11:03
and understand who the experts are versus
11:05
like in an area where I feel
11:07
like I have a lot of depth.
11:09
I take issue with its implicit authority
11:11
ranking and its ability to determine like
11:13
what ideas out there, what on the
11:15
web is good and not, when it's
11:17
doing its search. From at least my
11:19
initial prompting and experimentation in domain. I'm
11:21
like, oh man, like you're really gonna
11:23
have to audit the outputs here. It
11:25
will orient you, but you can't take
11:27
as given like many of the claims
11:29
here. This is the AI form of
11:31
Murray Gell-Mann Amnesia, which was coined
11:33
by the guy who wrote Jurassic Park.
11:35
I can't remember if his name is pronounced
11:37
Gelman or Gell-Mann. Murray Gell-Mann was a
11:39
physicist who came up with quarks and
11:41
a few other things. He was a
11:43
Nobel Prize winner and was considered widely
11:45
brilliant. And it was named after him
11:48
by Michael Crichton. The basic
11:50
idea is, if you're reading a page
11:52
in the New York Times about something
11:54
you really understand, and you're like, oh,
11:56
this is so dumb, and how could
11:58
they write this, and I don't believe
12:00
it. And then you just turn the
12:02
page and you look at something
12:04
you don't know, and you
12:06
believe it completely. You know,
12:16
if it's getting sort of expertise wrong
12:18
in a domain I understand, does that
12:20
mean it's also getting it wrong in
12:22
domains I don't understand? But of course
12:24
we never apply that as people. We
12:26
just assume, of course, it's right in domains
12:28
that we don't understand, which I think
12:30
is really interesting psychologically, but it also
12:32
has real implications in terms of how
12:34
people will use AI in the future
12:36
in general, because these things will become
12:38
the definitive source of a lot of
12:40
people's primary information. It's in some senses
12:43
really overlapping with some of the search
12:45
use cases in really deep ways. And
12:47
you have something where the sources traditionally
12:49
have been less evident. I know that
12:51
people are working on different ways to
12:53
surface what the primary sources are for
12:55
some of these things. It does have
12:57
really interesting implications for how you think
12:59
about knowledge in the modern era as
13:01
you're using AI, especially as you're using
13:03
agents, so they just go and do
13:05
stuff and then report back and you
13:07
know what they did. So I think
13:09
it's a very interesting topic. I'm not
13:11
sure how you solve that from like
13:13
a UX perspective, or maybe it's like
13:15
somewhat unsolvable given, you know, it also
13:17
reflects like what is knowledge on the
13:19
web. It really does feel like a
13:21
really dangerous thing from sort of a
13:25
propaganda and censorship perspective.
13:27
You know, social networks were kind
13:29
of V1 of that, or maybe
13:31
certain aspects of the web were V1
13:33
and social networks were V2, and this
13:35
is kind of the big version, because
13:38
it's a mix of search. It's like
13:40
if you mix Google with Twitter, with
13:42
Facebook, with everything else that you're using,
13:44
with all the media outputs, or media
13:46
outlets, all into one single device that
13:48
you interrogate, that's kind of where these
13:50
AIs are going. And so the ability
13:52
to control the output of these things
13:54
is extremely powerful, but also very dangerous.
13:56
You know, that's why I'm kind of
13:58
happy that we're in this multi-AI world,
14:00
multi-company world. There's a way to offset
14:02
that, and that's where open source becomes incredibly
14:04
important if you worry about civil liberties.
14:06
What do you think about Stargate? Maybe
14:08
there's like a couple different implied questions
14:10
in like... Stargate, right? One is how
14:12
much does it matter in the race,
14:14
like to continue to have access to
14:16
the largest infrastructure? I'm going to skip
14:18
the question about like whether or not
14:20
it's real, like there's a lot of
14:22
money involved here. I think another question
14:24
is how deep are the capital markets
14:26
to continue funding this stuff? Maybe a
14:28
final one is like just the involvement
14:30
of different sovereigns or quasi-sovereigns in this,
14:32
like I don't know if I have
14:35
a strong opinion on the latter two,
14:37
the way I think about the dynamic
14:39
of like how much does the capital
14:41
matter and like the implied like how
14:43
like do we continue to see scaling
14:45
on pre-training be a dominant factor is
14:47
I think of it really as like
14:49
uncertainty rather than risk. Right, like
14:51
if you think about capabilities as emergent
14:53
and people not being sure what sort
14:55
of algorithmic efficiencies counteract like the, you
14:57
know, improvements that will come from more
14:59
scale and the things you can do
15:01
to generate new data to improve in
15:03
other vectors and what we're going to
15:05
get out of test time scaling, like
15:07
I just think it's very hard to
15:09
predict, but I fail to see a
15:11
scenario where anybody trying to build AGI,
15:13
any of the large research labs, wouldn't
15:15
want the biggest cluster that they could
15:17
have, if it was free, right, or
15:19
if the capital was available to them.
15:21
And that to me says more than
15:23
anything else, like, we're going to get
15:25
more out of pre-training. Is it going
15:27
to be as efficient? Like, I think
15:30
that's unlikely. We're like a little bit
15:32
delayed on this, but we'll just give
15:34
ourselves a free pass, given it's episode
15:36
100. Predictions for 2025. Happy New Year!
15:38
It's February, but whatever.
15:40
It's like the Larry David episode. Yeah,
15:42
basically there's some statute of limitations of
15:44
how late into the year you can
15:46
say happy new year. We're now a
15:48
month in, so of course, we're way
15:50
over that. We should probably say happy
15:52
Valentine's Day, even though we're like two
15:54
weeks early. Now, Elad, what was
15:56
the vibe for 2025?
15:58
You can just predict things
16:00
are likely to happen. First, the foundation
16:02
model market should at least partially consolidate.
16:04
And it may be in those sort
16:06
of ancillary areas, so that's image, video,
16:08
voice, you know, a few other areas
16:10
like that, maybe some secondary LLMs or
16:12
foundation models will also consolidate. So I
16:14
do think we're going to see a
16:16
lot of consolidation, particularly if the FTC
16:18
is a little bit more friendly than
16:20
the prior regime. We'll also see some
16:22
expansion of sort of new races in
16:25
physics, biology, and materials, and the like.
16:27
So I think that that will happen
16:29
alongside general scaling of foundation models, which
16:31
will continue. And that includes reasoning and
16:33
that includes other things. So that's one
16:35
big area. I think the second area
16:37
is we're going to see. vertical AI
16:39
apps continue to work at scale. It's
16:41
Harvey for legal, and the list could
16:43
go on, and Sierra for customer success,
16:45
and a variety of folks for code
16:47
gen, for medical scribing, etc. So I
16:49
think it'll be the era of vertical
16:51
apps, and I think a subset of
16:53
those will start adding more and more
16:55
agentic things to them. You know, some
16:57
folks like Cognition are doing that. Third
16:59
would be self-driving, will get a lot
17:01
of attention, robotaxis, etc. Applied Intuition
17:03
I think is kind of a dark
17:05
horse to watch more generally on the
17:07
automotive side. And then I guess fourth
17:09
is that some consumer things I think
17:11
will get really large scale experiments happening
17:13
in a way that hasn't happened until
17:15
now. So I'm starting to see consumer
17:17
startups. I'm starting to see more consumer
17:20
applications from incumbents. Like I actually think
17:22
we're going to see a little
17:24
bit of a resurgence in consumer. It
17:26
may take a while, but I think
17:28
that'll happen. And then lastly I think
17:30
there's things that we all know will
17:32
happen. But we may start to see
17:34
some interesting behavior on agents, maybe some
17:36
early robot stuff, you know, but it'll
17:38
be one of those things where it's
17:40
more going to be the glimmer of
17:42
how this thing will work versus the
17:44
whole thing. But I think some of
17:46
those developments will be very exciting. So
17:48
those would be my five predictions for
17:50
25. How about you? What do you
17:52
got? We agree on a number of
17:54
different things. I think the whole like
17:56
definition for agent is super fuzzy, but
17:58
if we just think of it as
18:00
like do multi-step tasks successfully in some
18:02
sort of end-user environment and take action
18:04
beyond just generating content, like, we're already
18:06
seeing that, and I think we're going to see
18:08
that more broadly, as you were sort
18:10
of alluding to, but I think companies
18:12
get better, and product companies or vertically
18:14
integrated companies, they get better at handling
18:17
failure cases and managing state intelligently. And
18:19
so we're already seeing that in security
18:21
and support and in SRE. And I
18:23
think that will continue to happen. This
18:25
already happened in CodeGen, as you were
18:27
sort of alluding to, but I think
18:29
companies doing co-pilot products will naturally extend
18:31
to agents. They'll just try to do
18:33
more, right, and take more on. I
18:35
think one of the inputs to broader
18:37
consumer experimentation, as you describe, is just
18:39
like way more capable, small, low-
18:41
latency models. I don't think we have,
18:43
like, any monotonic movement toward compute at
18:45
the edge. Like, I think when people
18:47
are like, edge compute for the sake
18:49
of edge compute, I'm like, nobody cared.
18:51
Right, but if you can make that
18:53
transparent to the user and it's free,
18:55
then I think your ability to ship
18:57
things that are free is obviously unlocked
18:59
and I think that's cool. This is
19:01
something that will be a lot of
19:03
web apps, you know, so I don't
19:05
think it has to necessarily be on
19:07
device. Consumer products, to your earlier
19:09
point, there will be some, but I
19:12
also just think it's just going to
19:14
be things running on the internet that
19:16
just become part of your application stack
19:18
in your browser that will do really
19:20
interesting things. Yeah, well, stuff in the browser
19:22
can also use the GPU, but I
19:24
just think that the ability to
19:26
run locally might be a big unlock
19:28
for them. I don't know if you
19:30
and I disagree on timeline. I think
19:32
we're going to see technical proof of
19:34
breakthroughs in robotics and in generalization this
19:36
year, though not deployments. I think one
19:38
thing that's, like, maybe mispriced just
19:40
because it's very new is like people
19:42
don't really know how to think about
19:44
reasoning. I would claim that one thing
19:46
is it's as much improvement in reliability as
19:50
in complexity of task. One, like, mistake that
19:50
entrepreneurs and investors make that I have
19:52
made is like you look at something
19:54
and it's not working, and like the
19:56
issue is, like, a technical
19:58
issue, and then you, like, assume it's
20:00
not going to work. But I think
20:02
in AI you have to, like, keep
20:04
looking again and again and again because
20:07
stuff can begin to work really quickly.
20:09
Maybe one last one, that I've
20:11
just seen, like, small examples of
20:13
with our, like, Embed program and also
20:15
broadly in the portfolio, is, because you
20:17
have this diffusion of innovation like not
20:19
just with customers but with the types
20:21
of entrepreneurs that go take something on
20:23
and we're like beyond just the tip
20:25
of the spear now and more and
20:27
more people are like, I can do stuff
20:29
with AI. I think we're going to
20:31
get more... smart data generation strategies for
20:33
different domains where you need like domain
20:35
knowledge as well as understanding of AI.
20:37
So examples here could be like biology
20:39
and material science. Like you needed the
20:41
set of scientists who are capable of
20:43
innovating on data capture, which might literally
20:45
be like a biotech innovation versus a
20:47
computer science innovation to understand the potential
20:49
of deep learning and that the bottleneck
20:51
was data and then the type of
20:53
data you were like looking for. And
20:55
I think that is happening. That's really
20:57
exciting. This may be the year where
20:59
we see something really interesting happen on
21:01
the health side. As an example, where
21:04
you need specialized data, but it's not
21:06
as hard as, you know, the atomic
21:08
world of, you know, biomolecular design or
21:10
something. Anything else we should talk about?
21:12
The facial hair? Should I
21:14
bring it back? I liked the beard.
21:16
I like the beard and hat era.
21:18
Oh, interesting. I should go back to
21:20
that. The last question for today. We're
21:22
on episode 100. What do you think
21:24
the state of the world will be
21:26
relative to AI when we're at episode
21:28
200? I don't think we're part of
21:30
this anymore. I think it's just like
21:32
two agents going back and forth teaching
21:34
us stuff and like you and I
21:36
are no longer the hosts or the
21:38
choosers of topics. We're just, like, nodes
21:40
in the network. Will they be as
21:42
good-looking as us? Yeah, and they'll be
21:44
better. Computers. We'll see. It'll be like some
21:46
sort of Midjourney
21:48
art of us, just beautiful,
21:50
beautiful things on there. Okay, episode 200,
21:52
that's like what? Well, it's almost two years, it's weekly, so.
21:54
I think we're either in the RLHF farm or we're like sitting
21:57
on a beach in an era of abundance. That's a prediction.
21:59
You heard it here first. Well,
22:01
hopefully I'll see you here at episode 200.
22:03
And all the listeners too. Thanks, guys.
22:09
Find us on Twitter at @NoPriorsPod.
22:12
Subscribe to our YouTube channel
22:14
if you want to see our
22:16
faces. Follow the show on Apple
22:18
Podcasts, Spotify, or wherever you
22:20
listen. That way you get a
22:22
new episode every week. And sign
22:24
up for emails or find transcripts
22:26
for every episode at no-priors.com.