Episode Transcript
0:00
Today on the A.I. Daily
0:02
Brief, XAI has merged with X,
0:04
and here's what it means for
0:06
the broader frontier lab
0:08
battle. Before that in the
0:11
headlines, A.I. sees its first big
0:13
IPO this year, and we talk
0:15
about how it went. The A.I.
0:18
Daily Brief is a daily podcast
0:20
and video about the
0:22
most important news and
0:24
discussions in A.I. News you need
0:27
in around five minutes. We kick
0:29
off today with another big story
0:31
from Friday. Our main story is
0:33
about XAI combining with X. This
0:36
one is about CoreWeave's IPO, which unfortunately
0:38
turned into a bit of
0:40
an anti-climax. The company stock
0:42
dropped by 2.5% at launch on
0:44
Friday after fundraising targets were heavily
0:47
downsized. Corweve raised 1.5 billion from
0:49
the share sale, but had initially priced
0:51
the IPO to raise over $2.7 billion.
0:53
At one point in the planning phase, the stock
0:55
was to be priced at $55 per share, but
0:58
ended up going out at just $40. The stock
1:00
ended the day flat, closing at $40
1:02
and receiving no IPO pop. Bloomberg sources
1:04
said that half of the shares went
1:06
to the three largest investors in the deal,
1:09
with 90% going to the top 15. One of
1:11
those large investors was Nvidia, who
1:13
took a $250 million allocation to add
1:15
to their pre-existing 6% stake in the
1:17
company. CEO Michael Intrator said that
1:20
without Nvidia, the IPO, quote, wouldn't have
1:22
closed. He also added if 27 others
1:24
didn't show up it wouldn't have closed.
1:26
And while the headlines are calling this
1:28
a dud of an IPO and pointing to
1:30
CoreWeave specifically and AI more generally, what
1:33
Intrator is pointing out is that this is
1:35
a terrible moment to go public. The
1:37
NASDAQ index also had a 2.5% overall
1:39
drop on Friday contributing to a 13%
1:41
decline this month. Risk assets have been
1:44
struggling mightily in a market that
1:46
is characterized by insecurity and volatility
1:48
with hyped-up tech IPOs squarely
1:50
in the risk asset category. Now to
1:52
the extent that people are looking at what
1:55
wasn't working about the IPO in the context
1:57
of CoreWeave and AI rather than larger market volatility,
1:59
people are reading this either as a
2:01
potential signal for an AI infrastructure bubble
2:03
or they're pointing to some idiosyncratic warning
2:05
signs in CoreWeave specifically. On the AI
2:08
bubble conversation, CoreWeave co-founder Brannin McBee is
2:10
dismissive saying this conversation around an AI
2:12
bubble seems to come up every three
2:14
to six months or so and then
2:16
it drops away. What we see on
2:18
the ground and what I'm sure you're
2:20
hearing in Silicon Valley is just consistent
2:23
growing demand. Obviously if you are a
2:25
regular listener to this show you'll know
2:27
that that opinion is much closer to
2:29
my view than the idea that there's
2:31
some massive bubble just waiting to deflate.
2:33
The CoreWeave-specific problems might be
2:35
a little bit more difficult to brush
2:38
off. CoreWeave already has quite a
2:40
bit of debt and may need to
2:42
raise more to make up for the
2:44
shortfall in the IPO. That is, of
2:46
course, if they actually need that capital.
2:48
The company faces repayments of $7.5 billion
2:50
by the end of next year, although
2:53
they could also be able to refinance.
2:55
Like many companies in this AI infrastructure
2:57
space, they also have a highly concentrated
2:59
customer base. Microsoft represented 62% of their
3:01
revenue last year, with a further 15%
3:03
coming from an unknown single large customer.
3:05
Microsoft has already walked away from their
3:08
option to extend leases with the company.
3:10
However, CoreWeave did seal a big deal
3:12
with OpenAI in the lead-up to
3:14
the IPO. Bloomberg's Dave Lee commented that
3:16
unlike other big cloud providers, CoreWeave really
3:18
doesn't have anywhere to hide. He wrote,
3:20
While CoreWeave has some unique vulnerabilities, the
3:23
bigger picture here is that its accounts
3:25
will finally lay bare in quarterly reports,
3:27
the brutal economics of an industry that
3:29
is burning through an unprecedented amount of
3:31
cash in pursuit of some kind of
3:33
lucrative application nobody has quite figured out
3:35
yet. CoreWeave can't obfuscate
3:38
growth in AI services, nor can it
3:40
hide the interconnectedness of the industry,
3:44
of which Joe Tsai warned this
3:48
week. It's on the balance sheet of
3:50
CoreWeave where the clues might emerge,
3:53
written, for the first time, in plain
3:55
black and white for all to see.
3:57
I continue to be skeptical of this
3:59
type of analysis, but if for no
4:01
other reason than as a meta-understanding
4:03
of where the market is, it's worth
4:05
noting this is a fairly common opinion.
4:08
Speaking of IPOs, Perplexity CEO Aravind
4:10
Srinivas is denying that the company is
4:12
under financial pressure and needs to rush
4:14
to an IPO. A few days ago,
4:16
a Reddit user called Nothing Ever Happened,
4:18
aired out a theory on Perplexity's subreddit,
4:20
writing, I've recently noticed Perplexity making lots
4:23
of changes to cut costs. My theory
4:25
is that they're doing horribly financially. Those
4:27
charges included an insider telling them that
4:29
all funding for marketing and partnerships has
4:31
been paused, and layoffs, which the Redditor
4:33
discovered, quote unquote, by digging into LinkedIn
4:35
profiles and finding a lot of former
4:38
employees. The key complaint was changes to
4:40
how the service uses auto mode, which
4:42
now removes model selection from the user
4:44
during follow-up questions. The Redditor claimed that
4:46
their follow-up questions were always answered by
4:48
the default cheaper model, rather than a
4:50
high-end reasoning model like OpenAI's o1. Lest we
4:53
think that this is just one complaining
4:55
user, Perplexity CEO Aravind Srinivas took
4:57
to Reddit to post a response, which he also
4:59
copied over onto X as well. Now
5:01
he didn't reference the original post, but
5:03
did give some plausible explanations for each
5:05
of the points, and included several others
5:08
addressing complaints about degradation of service. Regarding
5:10
auto mode, Srinivas claimed that it was
5:12
a UX improvement to remove the model
5:14
selection in follow-up questions. He wrote that
5:16
the goal is to quote, let the
5:18
AI decide if it's a quick fast-answer query,
5:20
or a slightly slower multi-step pro search
5:23
query, or slow reasoning mode query, or
5:25
a really slow deep research query. The
5:27
long-term future is that the AI decides the amount
5:29
of compute to apply to a question,
5:31
and maybe clarify with the user when
5:33
not super sure. Our goal isn't to
5:35
save money and scam you in any
5:38
way. It's genuinely to build a better
5:40
product with less clutter and a simple selector
5:42
for customization options for the technically adept
5:44
and well-informed users. This is the right
5:46
long-term convergence point. And by the way,
5:48
I will say at this point that
5:50
one of the big UI-UX complaints around
5:53
these services has been model selector type
5:55
of issues. This is something that Sam
5:57
Altman and ChatGPT have discussed
5:59
extensively as well. Users hate
6:01
the fact that they have to look
6:03
through and understand which of the models
6:05
is good for different things. So I
6:08
don't think it's some conspiracy theory to
6:10
think that Perplexity, which is an extremely
6:12
UI/UX-focused company, is just trying to
6:14
improve that part of the experience. Now
6:16
maybe more pointed was this paragraph from
6:18
Srinivas, who wrote, are we running out
6:20
of funding and facing market pressure to
6:23
IPO? No, we have all the funding
6:25
we've raised and our revenue is only
6:27
growing. The objective behind auto mode is
6:29
to make the product better, not to
6:31
save costs. I've learned it's better to
6:33
communicate more transparently to avoid any incorrect
6:35
conclusions. Re IPO, we have no plans
6:38
of an IPO before 2028. Ultimately, when it
6:40
comes to perplexity, I think that their
6:42
bigger challenge is that every other frontier
6:44
lab also wants to be the gateway
6:46
to search. They're all working to improve
6:48
not only their underlying models, but also
6:50
their search experience. That, more than any
6:53
potential cost savings from auto model selection,
6:55
is going to
6:59
be the challenge that perplexity has to
7:01
overcome. As of recent notes, Perplexity claims
7:03
that they've crossed $100 million in annualized
7:05
revenue. For now, that
7:08
is going to do it for today's
7:10
AI Daily Brief Headlines edition. Next up, the
7:12
main episode. Today's episode is brought to
7:14
you by Vanta. Trust isn't just earned,
7:16
it's demanded. Whether you're a startup founder
7:18
navigating your first audit or a seasoned
7:20
security professional scaling your GRC program, proving
7:23
your commitment to security has never been
7:25
more critical or more complex. That's where
7:27
Vanta comes in. Businesses use Vanta to
7:29
establish trust by automating compliance needs across
7:31
over 35 frameworks like SOC 2 and
7:33
ISO 27001. Centralize security workflows, complete questionnaires
7:35
up to 5X faster and proactively manage
7:38
vendor risk. Vanta can help you start
7:40
or scale up your security program by
7:42
connecting you with auditors and experts to
7:44
conduct your audit and set up your
7:46
security program quickly. Plus with automation and
7:48
AI throughout the platform Vanta gives you
7:50
time back so you can focus on
7:53
building your company. Join over 9,000 global
7:55
companies like Atlassian, Quora, and Factory who
7:57
use Vanta to manage risk and improve
7:59
security in real time. For a limited
8:01
time, this audience gets $1,000 off Vanta
8:03
at vanta.com/nlw. That's vanta.com/nlw for $1,000 off.
8:05
All right, AI Daily Brief listeners. Today
8:08
I'm excited to tell you about the
8:10
disruption incubator. One of the things that
8:12
our team sees all the time is
8:14
a lot of frustration from enterprises. There's
8:16
a fatigue around small incremental solutions, a
8:18
concern around not thinking big enough, tons
8:20
of bureaucratic challenges, of course, inside big
8:23
companies. And frankly, we just hear all
8:25
the time from CEOs, CTOs, other types
8:27
of leaders that they want to ship
8:29
some groundbreaking AI agent or product or
8:31
feature. In many cases, they even have
8:33
a pretty well thought out vision for
8:35
what this could be. Their teams are
8:38
just not in an environment conducive to
8:40
that type of ambition. Well, it turns
8:42
out our friends at Fractional have experienced
8:44
the exact same thing. Fractional are the
8:46
top AI engineers specializing in transformative AI
8:48
product development. And to answer this
8:50
particular challenge, they have, with perhaps a
8:53
little bit of help from Superintelligent,
8:55
set up what they're calling the disruption
8:57
incubator for exactly this type of situation.
8:59
The idea of the disruption incubator is
9:01
to give a small group of your
9:03
most talented people an overly ambitious mandate,
9:05
something that might have taken one to
9:08
two years within their current construct. Send
9:10
them to San Francisco to work with
9:12
the team at fractional, and within two
9:14
to three months, ship something that would
9:16
have previously been impossible. The idea here
9:18
is that you are not just building
9:20
some powerful new agent or AI feature,
9:23
but you're actually investing in your AI
9:25
leadership at the same time. If this
9:27
is something interesting to you, send us
9:29
a note at Agent at B super
9:31
dot AI with the word disruption in
9:33
the title and we will get right
9:35
back to you with more information. Again
9:38
that's Agent at B super dot AI
9:40
with disruption in the subject line. Welcome
9:42
back to the AI Daily Brief. Today
9:44
we are discussing the state of the
9:46
battle among the frontier labs for AI
9:48
supremacy and the specific context for the
9:50
conversation is that late on Friday, we got
9:53
news that Elon Musk's XAI, which is of
9:55
course the parent company of Grok, and
9:57
the home of all his generative AI
9:59
adventures, had acquired X, which is of
10:01
course the former Twitter. The announcement
10:03
post, which came out at 5:20
10:05
p.m. Eastern Time on Friday read,
10:07
XAI has acquired X in an
10:09
all-stock transaction. The combination values XAI
10:12
at $80 billion and X at $33 billion,
10:14
which is $45 billion less $12 billion in
10:16
debt. Since its founding two years ago,
10:18
XAI has rapidly become one of
10:20
the leading AI labs in the
10:22
world, building models and data centers
10:24
at unprecedented speed and scale. X is
10:26
the digital town square where more than 600
10:29
million active users go to find the real-time
10:31
source of ground truth and in the last
10:33
two years has been transformed into one of
10:35
the most efficient companies in the world positioning
10:37
it to deliver scalable future growth. XAI
10:40
and X's futures are intertwined. Today we
10:42
officially take the step to combine the
10:44
data, models, compute, distribution, and talent. This
10:46
combination will unlock immense potential by blending
10:48
X AI's advanced AI capability and expertise
10:50
with X's massive reach. The combined company
10:53
will deliver smarter, more meaningful experiences to
10:55
billions of people while staying true to
10:57
our core mission of seeking truth and
10:59
advancing knowledge. This will allow us to
11:01
build a platform that doesn't just reflect the
11:04
world, but actively accelerates human progress.
11:06
I would like to recognize the hardcore
11:08
dedication of everyone at XAI and X that has
11:10
brought us to this point. This is just the
11:12
beginning. Now on the one hand, the companies
11:15
are both private, and Elon presumably has the
11:17
support of investors, so he can pretty much
11:19
do what he wants here. Still, the deal is
11:21
far from normal. The Wall Street Journal reports,
11:23
the new valuations were determined during
11:25
negotiations between the two Musk arms,
11:27
which both had the same advisors,
11:29
people familiar with the matter said. The
11:31
last time XAI raised money was in December,
11:33
and it was thought to be valued at
11:35
around $40 billion, so this deal implies a
11:37
doubling in three months. That's obviously quite
11:40
an acceleration, but not necessarily totally out
11:42
of sync with the world of AI.
11:44
The journal points out that this isn't the first
11:46
time Elon has done something like this. Back in
11:48
2016, Elon used Tesla stock to buy
11:50
his solar energy company, SolarCity. Musk
11:52
is apparently a great dealmaker when
11:55
he's negotiating with himself. Still, if
11:57
you hold aside the mechanics and the
11:59
valuations, it's very clear why this deal
12:01
makes sense for the two companies. Musk
12:03
has already been open that the Grok
12:05
model was trained on X data, and
12:07
the chatbot is now embedded in the
12:09
platform as a native assistant. The two
12:11
platforms are already deeply entwined in their
12:13
user experience, their resources, and even some
12:15
of their personnel. For X, the merger
12:17
takes the pressure off of it to
12:20
thrive exclusively as an independent social media
12:22
platform. Advertising revenue has not been without
12:24
challenges since Musk took over in 2022.
12:26
And while there have
12:28
been signs that the numbers had recovered
12:30
in the past few months,
12:34
X now has the additional economic value
12:36
of simply becoming a data repository and
12:38
a portal for XAI. Opinions on this
12:40
deal, as with so much in Elon
12:42
Musk world, basically come down to what
12:44
you think of Elon Musk. The pro-
12:46
Musk side is represented by posts like
12:49
this one from Fernando Cow, who writes,
12:51
When Musk bought Twitter, everyone was confused.
12:53
Why would a man focused on electric
12:55
vehicles, space travel, and neural interfaces want
12:57
a social media platform? But X
12:59
had something incredibly valuable
13:01
that most AI companies desperately need: real-time,
13:03
human-generated diverse data from 600 million active
13:05
users. This is the perfect fuel for
13:07
AI models, and it's exactly what XAI
13:09
needs to compete with OpenAI and
13:13
Anthropic. Investor Chamath Palihapitiya writes, the currently
13:13
best-ranked consumer AI model has just acquired
13:15
the most complete corpus of scaled real-time
13:18
information on the internet. The data will
13:20
be a part of the pre-training to
13:22
make the models XAI makes even more
13:24
differentiated. This is a smart
13:26
move in a moment when other model
13:28
makers are caught up and slowed down
13:30
in copyright lawsuits, like OpenAI for
13:32
training data, or pre-training quality, like Meta.
13:34
On the flip side is the common
13:36
take represented by this one from Compound
13:38
248. They write, it's hard to know
13:40
what to make of XAI buying X.
13:42
My gut is it's an act of desperation.
13:44
On the surface, the deal values X
13:46
flat to Twitter's 2022 takeout value, despite
13:49
massive underperformance on financial metrics. Sounds like
13:51
a win for X shareholders. But it
13:53
is a stock deal, and X owners will
13:55
now own 29% of the combined company, shifting
13:57
from a near pure-play social media bet
13:59
to an AI bet that's very much
14:01
on the come, plus a diluted share in
14:03
Twitter. Yes, XAI is a powerful model,
14:05
but not unusually so. XAI has de
14:07
minimis revenue, is hemorrhaging cash, and its
14:09
prospective business opportunity seems very difficult given
14:11
the relevant competition, which (a) has a head
14:13
start, (b) is a murderer's row, and
14:15
(c) has existing businesses and go-to-market strategies
14:18
to build on. X's $12 billion of
14:20
very high-cost debt isn't going away. It
14:22
will be in perpetual cash raising mode
14:24
until that changes, which leaves it at risk
14:26
to the whims of the funding markets. I
14:28
wouldn't bet against Elon, but I'd be
14:30
very nervous as a combined company owner.
14:32
Now honestly, this is actually fairly middle
14:34
of the road. A more reflexively antagonistic
14:36
take comes from Adam Cochran who wrote,
14:38
in other words, Musk used his pumped-up
14:40
XAI stock to pay multiple
14:42
times over value for X, while still
14:44
taking an $11 billion loss on the transaction,
14:46
screwing over XAI investors and X
14:49
investors, and selling your data to
14:53
his own AI company. But otherwise it's
14:55
not a breakthrough model and it's terribly
14:57
monetized. Again, the middle of the road
14:59
take really focuses on data. Raoul Pal
15:01
from Real Vision writes, it was always
15:03
about the data for the AI. I
15:05
talked about this when he first bought
15:07
X and said it was a bargain
15:09
back then due to the AI training
15:11
data. And the AI is all about
15:13
the robots and the robots are all
15:15
about Mars, as is everything else. I
15:18
think that while the focus on data
15:20
makes sense, people might be underestimating the
15:22
value of the integrated experience with Twitter
15:24
content. To the extent that these companies
15:26
are all competing to be the next
15:28
generation search portal where people begin their
15:30
internet experience, Grok offers something fundamentally different
15:32
that none of the competitors, Google, OpenAI,
15:34
Anthropic, Perplexity, etc., offer, which is
15:36
the ability to integrate the meta-conversation into
15:38
deep research. I think I have a
15:40
particular point of view on this given
15:42
that I've now built two podcasts, for
15:44
both of which a major value proposition
15:46
is the fact that we don't just
15:49
talk about the news, but the discussion around the news.
15:51
The thing that takes longest in producing
15:53
both The Breakdown and the AI Daily
15:55
Brief is going through and understanding dozens
15:57
if not hundreds of different opinions around
15:59
anything that's happening, in order to be
16:01
able to synthesize that into a coherent
16:03
view not just of what actually happened,
16:05
but what's likely to happen next based
16:07
on how people are receiving that news.
16:09
This is again not commenting on any
16:11
of the questions of self-dealing or valuations
16:13
or anything like that. I just think
16:15
that the Grok-Twitter merger has value
16:18
beyond just a pre-training data play. So
16:20
what's happening though beyond Grok if we're
16:22
using this as a way to catch
16:24
up on the state of the AI
16:26
frontier lab battle? While elsewhere in the
16:28
AI space, the biggest players seem to
16:30
be duking it out for leadership in
16:32
the major verticals. We have, of course,
16:34
discussed extensively OpenAI's new image generator,
16:36
which has been just absolutely sucking all
16:38
of the oxygen out of the room.
16:40
But because it was so dominant last
16:42
week, a lot of people missed the
16:44
fact that Anthropic's dominance in coding seems
16:46
to be contested for the first time
16:49
in months. The same GPT-4o update that
16:51
made ChatGPT so much better at image
16:53
generation also made it better at coding. According
16:55
to Artificial Analysis's Intelligence Index,
GPT-4o is now the top non-reasoning model,
16:57
overtaking Anthropic's Claude 3.7 Sonnet. Now, their
16:59
index combines a range of different coding
17:01
and knowledge benchmarks to come up with
17:03
a blended intelligence score, but digging into
17:05
the coding-specific scores, GPT-4o is now at
17:07
the top of the leaderboard. At the
17:09
same time though, there's also a
17:11
ton of chatter that actually, in fact,
17:13
Google's Gemini 2.5 is an even better coding
17:15
assistant. During last week's release, the reasoning
17:18
model was clearly a high-performing coding assistant
17:20
based on the benchmarks, but having tested
17:22
it now for a few days, VentureBeat
17:24
broke down why this model could
17:26
be a big step up for programmers.
17:28
They noted that, unlike OpenAI's models,
17:30
Google provides full access to chain of
17:32
thought reasoning. For programmers, that means you
17:34
can follow the model along precisely and
17:36
audit its results, picking up and correcting
17:38
errors along the way. VentureBeat wrote,
17:40
In practical terms, this is a breakthrough
17:42
for trust and steerability. Enterprise users evaluating
17:44
output for critical tasks like reviewing policy
17:47
implications, coding logic, or summarizing complex research
17:49
can now see how the model arrived
17:51
at an answer. That means they can
17:53
validate, correct, or redirect it with more
17:55
confidence. It's a major evolution from the
17:57
black box feel that still plagues many
17:59
LLM outputs. Many coders have also discovered that
18:01
Gemini 2.5 Pro is much better at succeeding
18:03
at one-shot tasks. The strong reasoning is
18:05
a possible explanation. The model lays out its
18:07
design and code structure before writing a single
18:09
line of code. Now this could also be
18:12
just an artifact of the observability, allowing
18:14
programmers to see exactly what the model
18:16
is doing throughout. Another benefit that could
18:18
help is Gemini's 1 million token context
18:20
window. Anthropic is only now preparing to
18:22
release a 500,000 token context window for
18:24
Claude, an upgrade from the 200,000 tokens
18:26
they offer currently. Large context windows allow
18:28
bigger code bases to be uploaded, and
18:30
more importantly, to be understood by the
18:32
model while working on coding problems. One
18:35
feature that also feels like it's under-explored
18:37
at the moment, is the new workflows
18:39
that are opened up by the multimodal
18:41
reasoning capabilities present in Gemini 2.5 Pro.
18:43
Like the new version of GPT-4o, Gemini 2.5
18:45
can apply native reasoning to image inputs. This
18:47
is valuable for more than allowing these models
18:50
to easily edit images,
18:52
as developers are starting to realize.
18:54
Yancey Min, an AI Figma plug-in designer,
18:56
walked through a tinkering session with GPT-4o.
18:58
First, he discovered that the model can
19:00
take interface code and generate an image
19:02
of the interface. Then he found the
19:05
model can modify the code based on
19:07
visual alterations to the interface. In
19:09
this case, Min brushed over a tab
19:11
in the interface image, and the model
19:13
adjusted the code accordingly.
19:15
On the subject of agentic support,
19:20
one tweet asked: to MCP or not
19:24
to MCP, that's the question. Let
19:27
me know in the comments. As you might imagine,
19:29
the replies were strongly in favor
19:31
of Google supporting the universal protocol
19:33
for agentic tooling. And I actually think
19:35
maybe to take a step back and sum
19:37
this all up, this is really reflective
19:39
of the evolution of the frontier model battle. Increasingly,
19:42
the competition is going to be
19:44
less about general performance and more
19:46
about specific use cases. We're evolving
19:48
in such a way that people are actually
19:50
integrating these tools at the core of new
19:52
and existing workflows, and they're picking the best
19:54
models and the best interfaces and the best
19:56
experiences and the best products for whatever it
19:59
is they're trying to do. Coding is clearly
20:01
one of the breakout use cases, which
20:03
is why there's so much competition
20:05
around it. Deep research-style searching is also
20:07
clearly going to be a core
20:09
experience, and that's why the XAI-X merger is about
20:11
more than just Elon's
20:13
financial engineering. Ultimately, I expect over
20:15
the course of the next year not
20:17
only continuous upgrades to the underlying models,
20:19
but more and more focus on these
20:21
specific domain areas and specific use cases
20:23
where the rubber is actually hitting the
20:25
road when it comes to the business
20:27
applications of the underlying technology. Anyways
20:29
interesting stuff to kick off our week,
20:31
but that's going to do it for
20:33
today's AI Daily Brief. Thanks for listening or
20:35
watching, as always, and until
20:37
next time, peace.