Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
0:00
In a world of economic
0:02
uncertainty and workplace transformation, learn
0:05
to lead by example from
0:07
visionary C-suite executives like Shannon
0:09
Schuyler of PwC and Will
0:11
Pearson of iHeartMedia. The
0:14
good teacher explains, the great
0:16
teacher inspires. Don't always leave your
0:18
team to do the work. That's
0:20
been the most important part of
0:22
how to lead by example. Listen
0:24
to Leading by Example, executives making an
0:27
impact on the I Heart Radio app,
0:29
Apple Podcasts, or wherever you get your
0:31
podcasts. Hi, I'm Bob Pittman, Chairman
0:35
Bob Pittman, chairman and CEO of I Heart
0:37
Media. I'm excited to introduce a brand new
0:39
season of my podcast, Math and Magic, stories
0:41
from the frontiers of marketing. I'm having conversations
0:44
with some folks across a wide range of
0:46
industries to hear how they reach the top
0:48
of their fields and the a
1:00
rock star is very fun, but helping people is
1:02
way more fun. And Damien Maldonado, CEO
1:04
of American Financing. I figured out the
1:07
formula: you have to work hard, then that's
1:09
magic. Join me as we uncover innovations
1:11
in data and analytics, the math, and
1:13
the ever important creative spark, the magic.
1:16
Listen to math and magic on the
1:18
I heart radio app, Apple podcast, or
1:20
wherever you get your podcast. I'm ready
1:22
to fight. Oh, this is fighting words.
1:24
Okay, I'll put the hammer back. Hi,
1:27
I'm George M. Johnson, a best-selling author
1:29
with the second most banned book in
1:31
America. Now more than ever, we need
1:33
to use our voices to fight back.
1:36
Part of the power of black, queer
1:38
creativity is the fact that we got
1:40
us, you know? We are the
1:42
greatest culture makers in world history.
1:45
Listen to fighting words on the
1:47
I-heart radio app, Apple podcast, or
1:49
wherever you get your podcast.
1:55
Hey y'all, it's your girl Chiquis, and I'm
1:57
back with a brand new season of your
1:59
favorite podcast, Chiquis and Chill. I'll be
2:02
sharing even more personal stories with
2:04
you guys and as always you'll
2:06
get my exclusive take on topics
2:08
like love, personal growth, health, family
2:10
ties, and more. And don't forget
2:12
I'll also be dishing out my
2:14
best advice to you on episodes
2:16
of Dear Chiquis. It's going to
2:18
be an exciting year and I
2:20
hope that you can join me.
2:22
Listen to Chiquis and Chill, season
2:24
4 on the iHeart Radio app,
2:26
Apple Podcast or wherever you get
2:28
your podcasts. Soul of an angel,
2:30
body of a devil, chosen by
2:32
God and perfected by science, this
2:34
is Better Offline, and I'm your
2:36
host, Ed Zitron. Now, while working
2:38
on my newsletter
2:40
last week,
2:42
I was chatting with my friend,
2:44
friend of the show, Casey Kagawa,
2:46
about generative AI, and we kept
2:48
coming back to one thought. Where's
2:50
the money? Where is it? No,
2:52
really, where is the money? Where
2:54
is the money that this supposedly
2:56
revolutionary world-changing industry is making and
2:58
of course will make in the
3:00
future? And the answer is simple.
3:02
After hours and hours of grinding
3:04
through earnings of grinding through media
3:07
articles of grinding through all sorts
3:09
of things. I just don't believe
3:11
it really exists. It's real, but
3:13
it's small. Generative AI lacks the
3:15
basic unit economics, product market fit,
3:17
or market penetration associated with any
3:19
meaningful software boom, and outside of
3:21
open AI, the industry may be
3:23
pathetically hopelessly small, all while providing
3:25
few meaningful business returns and constantly
3:27
losing money. I'm going to be
3:29
pretty straightforward with everything I say
3:31
in this two-parter, because the numbers
3:33
and the facts in my hypothesis
3:35
are pretty fucking damning of both
3:37
the generative AI industry and its
3:39
associated boosters. You're going to
3:41
get this episode, then there's going
3:43
to be a monologue about something
3:45
else or something related. I really
3:47
haven't got to it yet. And
3:49
then a second part, which I'm
3:51
recording immediately after this one. Little
3:53
behind the curtain there for you.
3:55
Anyway, in reporting this analysis, I've...
3:57
I've done everything I can to
3:59
try and push back against my
4:01
own work, and I've sought evidence
4:03
to counter the things that I've
4:05
seen, like the revenue and the
4:07
business models of these companies. Yet in
4:09
doing so, I've only become more convinced
4:11
of the flimsyness of generative AI in
4:14
the associated industry, and the likelihood of
4:16
this bubble bursting in a way that
4:18
kneecaps tech valuations for a prolonged period,
4:20
or worse, hits the major stock market. Now
4:22
I really had originally written a far
4:24
more jocular and outraged and pissy script
4:26
but... While I was writing it, I
4:29
realized I really had to be blunt,
4:31
because what I'm describing is a systemic
4:33
failure. Venture Capital has propped up
4:35
OpenAI and Anthropic, two companies
4:37
that have burned a combined $10.5
4:39
billion in 2024, and that number
4:41
is set to double or more
4:43
in 2025. The tech media has allowed
4:46
Sam Altman to twist them, to
4:48
validate completely fictional ideas, as a
4:50
means of propping up this unprofitable
4:52
environmentally destructive software company. And Big
4:54
Tech has become so disconnected from
4:56
reality that it is incapable of
4:59
seeing how little actual returns there
5:01
are in generative AI. And they're failing,
5:03
by the way. As I'll walk you through
5:05
in these episodes, the generative AI
5:07
industry is very small, with the
5:09
consumer market of the entire American
5:11
generative AI industry outside of ChatGPT
5:13
barely cracking 100 million monthly active
5:15
users, which puts them below a
5:17
lot of free-to-play games that you
5:19
get on your iPhone. Hyperscalers
5:22
have already spent hundreds of billions
5:24
of dollars in capital expenditures for
5:26
an AI industry that has the
5:28
combined monthly active users of a
5:30
free to play mobile game. I
5:32
really must repeat myself. It's insane.
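To put a deliberately crude number on that mismatch: the episode doesn't pin the hyperscaler capital expenditure to an exact figure beyond "hundreds of billions," so the $200 billion below is purely an assumed placeholder, and the 100 million monthly active users is the figure cited above for the consumer generative AI market outside ChatGPT. A minimal sketch, under those stated assumptions:

# Crude illustration of capex versus users. The capex figure is an assumed
# placeholder for "hundreds of billions"; the user count is the roughly
# 100 million MAU cited in this episode for the consumer market outside ChatGPT.
assumed_hyperscaler_capex = 200e9   # USD, placeholder assumption, not a reported figure
non_chatgpt_consumer_mau = 100e6    # monthly active users, per the episode

print(f"Implied capex per monthly active user: "
      f"${assumed_hyperscaler_capex / non_chatgpt_consumer_mau:,.0f}")
# -> roughly $2,000 per monthly active user at the assumed figures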
5:34
But unlike most mobile games, generative
5:36
AI doesn't really make any money.
5:38
And for those of you wondering if selling
5:40
access to AI models is the solution, it's
5:43
important to know that Open AI, the market
5:45
leader in generative AI, made less than a
5:47
billion dollars on API calls in 2024. And
5:49
that's when people plug their models in, for
5:51
those of you who don't understand; it's
5:54
the difference between you loading up the
5:56
ChatGPT app and someone who has an AI,
5:58
a generative AI like ChatGPT... Ryan
12:53
Reynolds here for Mint Mobile. I don't
12:55
know if you knew this, but anyone
12:57
can get the same premium wireless for
12:59
$15 a month plan that I've been
13:01
enjoying. It's not just for celebrities, so
13:03
do like I did, and have one
13:05
of your assistant's assistants switch you
13:07
to Mint Mobile today. I'm told it's
13:09
super easy to do at mintmobile.com slash
13:12
switch. Up front payment of $45 for
13:14
three-month plan equivalent to $15 per month
13:16
required. Intro rate for three months only.
13:18
Then full price plan options available. Taxes
13:20
and fees extra. See full terms at mintmobile.com.
13:26
Okay, we're back. Rory, the
13:28
Manises incident might be the
13:30
most famous UFO case in
13:32
Spanish history, but it's far
13:34
from the only case. In
13:36
1965, an utterly insane tale
13:38
emerged from a military site
13:40
in Badajoz. I don't know
13:43
how to say that Bandahos.
13:45
But, sorry, Bados. Badahos. That
13:47
sounds like a German man
13:49
saying that something's bad-ass. It's
13:51
like, oh, that's bad. Let's
16:24
really get down into the nitty gritty
16:26
of these numbers. So as discussed previously,
16:29
according to the reporting by the information,
16:31
Open AI's revenue was likely somewhere in
16:33
the region of $4 billion in 2024.
16:35
The burn rate, according to the information,
16:37
was $5 billion after revenue in 2024,
16:40
excluding stock-based compensation, which Open AI, like
16:42
other startups, uses as a means of
16:44
compensation on top of cash. Nevertheless, the
16:46
more it gives away, the less it
16:48
has for capital raises, and these are
16:51
technically costs, though they're not real money.
16:53
unless there's a liquidity event, but you
16:55
don't need to worry about that. To
16:57
put this in blunt terms, based on
16:59
reporting by the information, and I'm repeating
17:02
myself here, but I really need you
17:04
to remember this, running open AI costs
17:06
$9 billion in 2024. The cost of
17:08
the compute to train models alone,
17:10
$3 billion, obliterates the entirety
17:13
of their subscription revenue, which is about
17:15
$3 billion, by the way. And the
17:17
compute from running models, $2 billion, takes
17:19
the rest and then some. They actually
17:21
end up losing an extra billion on
17:24
top of that. Sam Altman's net worth
17:26
is a billion dollars, by the way.
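To keep the arithmetic here easy to follow, here's a rough back-of-the-envelope sketch using only the approximate figures quoted above from The Information's reporting: roughly $4 billion in 2024 revenue, about $3 billion of it from subscriptions and under $1 billion from the API, roughly $3 billion of compute to train models, about $2 billion of compute to run them, and around $9 billion in total costs. It's an illustration of the transcript's own numbers, not OpenAI's accounting:

# Rough sketch of OpenAI's reported 2024 economics, using the approximate
# figures quoted in this episode (billions of USD). These are reported
# estimates, not official numbers.
revenue_total = 4.0          # ~$4B total 2024 revenue
revenue_subscriptions = 3.0  # ~$3B from ChatGPT subscriptions
revenue_api = 1.0            # <$1B from API access

compute_training = 3.0       # ~$3B to train models
compute_inference = 2.0      # ~$2B to run (serve) models
total_costs = 9.0            # ~$9B to run the whole company

compute_only = compute_training + compute_inference
print(f"Compute alone: ${compute_only:.1f}B vs revenue ${revenue_total:.1f}B "
      f"-> ${compute_only - revenue_total:.1f}B lost before any other costs")
print(f"Total burn after revenue: ${total_costs - revenue_total:.1f}B")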
17:28
Casey Kagawa has now used this as
17:30
the Altman index, so it's like, you've
17:32
lost one Sam Altman, that's a billion
17:35
dollars. But just to be clear, it
17:37
doesn't just cost more to run Open
17:39
AI than they make. It costs them
17:41
a billion dollars more than the entirety
17:43
of their revenue to run the software
17:46
they sell before any other costs. Why
17:48
are we not more concerned about this
17:50
company? Now something else to note is
17:52
that Open AI also spends an alarming
17:54
amount of money on salaries. Over 700
17:57
million dollars in 2024 before you consider
17:59
the compensation from stock. A number that
18:01
will also have to increase because Open
18:03
AI is growing, which means hiring as
18:06
many people as possible, and they're paying
18:08
through the nose for them. But let's
18:10
talk about how Open AI makes money.
18:12
Open AI sells access to its models
18:14
via its API and by selling premium subscriptions
18:17
to ChatGPT. The majority of its revenue
18:19
over 70% comes from subscriptions to premium
18:21
versions of ChatGPT. The information also reported
18:23
that Open AI now has 15.5 million
18:25
paying customers, though it's unclear what level
18:28
of the service they're paying for, or
18:30
how sticky these customers are as in
18:32
how likely they are to stick around,
18:34
or the cost of acquiring these customers,
18:36
or really any other metric to tell
18:39
them how valuable these customers are to
18:41
the bottom line. Nevertheless, Open AI loses
18:43
money on every single paying customer, just
18:45
like its free users. Increasing paid subscribers
18:47
to a... Open AI services somehow increases
18:50
Open AI's burn rate. This is not
18:52
a real company. Now the New York
18:54
Times reports that Open AI projects it
18:56
will make $11.6 billion in 2025 and
18:58
assuming that Open AI burns at the
19:01
same rate it did in 2024, spending
19:03
$2.25 to make $1, Open AI is
19:05
on course to burn over $26 billion
19:07
in 2025 for a loss of $14.4
19:09
billion. Who knows what their actual cost
19:12
will be? Now you've probably heard about
19:14
SoftBank coming in, SoftBank's going
19:16
to feed them the money, and SoftBank
19:18
said they're going to spend money on
19:20
this, that and the other. That round
19:23
has not closed yet. Masayoshi Son, a
19:25
complete fucking idiot who's lost thirty-odd
19:27
billion dollars for SoftBank, the Japanese
19:29
mega-conglomerate. He's dedicating billions of dollars
19:31
of revenue to buying OpenAI services.
19:34
Unless this is a straight up trade
19:36
where he's just sending money before the
19:38
services come in, I don't know if
19:40
it happens. And I'm going to get
19:42
into things like agents later, but the
19:45
information reported that open AI expects to
19:47
make $3 billion in revenue from agents.
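As a quick gut check on that projection, here's the same sort of back-of-the-envelope sketch, taking at face value the reported $11.6 billion 2025 revenue projection, the roughly $2.25 spent per dollar earned implied by 2024, and the reported $3 billion hoped for from agents. These are the figures quoted in this episode, not an official forecast:

# Back-of-the-envelope 2025 projection using the figures quoted above.
# Assumes the 2024 spend ratio (~$2.25 per $1 of revenue) holds in 2025.
projected_revenue_2025 = 11.6       # $B, reported OpenAI projection
spend_per_dollar_2024 = 9.0 / 4.0   # ~$2.25 spent per $1 earned in 2024

projected_spend = projected_revenue_2025 * spend_per_dollar_2024
projected_loss = projected_spend - projected_revenue_2025
print(f"Projected 2025 spend: ~${projected_spend:.1f}B")  # ~$26B
print(f"Projected 2025 loss:  ~${projected_loss:.1f}B")   # ~$14.5B, in line with the figure above

agents_revenue_hope = 3.0  # $B OpenAI reportedly expects from agents
print(f"Agents as a share of projected revenue: {agents_revenue_hope / projected_revenue_2025:.0%}")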
19:49
By the end of this episode, you're
19:51
going to realize how fucking stupid that
19:53
sounds. We'll get there later. It's also
19:56
important to note that OpenAI's costs
20:00
are partially subsidized by its relationship
20:02
with Microsoft, which provides cloud
20:04
compute credits for its Azure cloud service.
20:04
Not super technical, it's just when they
20:07
host people's software and files and such,
20:09
and the compute to run these models.
20:11
And they also offer this at a steep,
20:13
steep discount to Open AI. Or put
20:15
another way, it's like Open AI got
20:18
paid with air miles, but the airline
20:20
lowered the redemption cost of booking a
20:22
flight with those air miles, allowing it
20:24
to take more flights than any other
20:26
person with the equivalent amount of points.
20:29
Until recently, OpenAI exclusively used Microsoft
20:31
Azure services to train, host, and run
20:33
its models. But recent changes to its
20:35
deal means that OpenAI is now working
20:37
with Oracle to build up further data
20:40
centers to train, host, and run its
20:42
models. It's unclear whether this partnership will
20:44
work in the same way as the
20:46
Microsoft deal, with OpenAI provided credits and
20:48
discounts like before. If not, OpenAI's operating
20:51
costs will only go up. Per previous
20:53
reporting from The Information, OpenAI pays
20:55
just over 25% of the cost of
20:57
Azure's GPU compute as part of
20:59
their deal with Microsoft. And that's about
21:02
$1.30 per GPU per hour
21:04
versus the regular Azure cost of
21:06
$3.40 to $4 an hour. I know
21:08
that this sounds really technical, but in
21:10
very short, they're getting a sweet deal
21:13
from Microsoft and if anything happens then
21:15
they're completely fucked anyway. They're burning billions
21:17
of dollars. It's insane. Let's talk about
21:19
user numbers because Open AI has quite
21:21
a few. They recently announced that they
21:24
have 400 million weekly active users. Now
21:26
Weekly Active Users is a wanky number
21:28
and a very strange one for a
21:30
company like this. Open AI may pretend
21:32
to be a consumer company, but the
21:35
majority of their revenue comes from monthly
21:37
subscriptions, making them kind of a cloud
21:39
software company. Classically, cloud software companies report
21:41
monthly active users. That way you can,
21:43
I don't know. Compare one number, which
21:46
is the amount of active users you
21:48
have, with the paid users you have,
21:50
and then say, oh, that's a good
21:52
business. That's a good business right there,
21:54
man. Guess what? Open AI isn't giving
21:57
out monthly active users. Don't worry. I
21:59
might have estimated it. When I asked
22:01
Open AI to define what a weekly
22:03
active user was, it responded by
22:05
pointing me to a tweet by Chief
22:08
Operating Officer Brad Lightcap that said ChatGPT
22:10
recently crossed 400 million weekly active users.
22:12
We feel very fortunate to serve 5%
22:14
of the world every week. What a
22:17
fucking liar. It's extremely questionable that Open
22:19
AI refuses to define this core metric,
22:21
by the way. And without a definition
22:23
in my opinion, there is no way
22:25
to assume anything other than Open AI
22:28
is actively gaming its numbers. Now there's
22:30
likely two reasons they focus on weekly
22:32
active users. One, as described, these numbers
22:34
are easy to game. You can choose
22:36
any seven-day period. And two, the
22:41
majority of OpenAI's revenue comes from paid
22:41
subscriptions to chatGPT. And that latter point
22:43
is crucial because it suggests OpenAIs not
22:45
doing anywhere near as well as it
22:47
seems, based on the very basic metrics
22:50
used to measure the success of a
22:52
software product. The information reported on January
22:54
31st that OpenAI, like I mentioned, had 15.5
22:56
million monthly paying subscribers, and they added
22:58
in this piece that this was less
23:01
than a 5% conversion rate of OpenAI's
23:03
weekly active users. A statement that's kind
23:05
of like dividing the number 52 by
23:07
the letter A. This is not an
23:09
honest or reasonable way to evaluate the
23:12
success of ChatGPT's still unprofitable software business,
23:14
because the actual metric, like I mentioned,
23:16
would have been to divide paying subscribers
23:18
by monthly active users, or the other
23:20
way around, I guess, a number that
23:23
would be considerably higher than 400 million.
23:25
And the reason they don't want you
23:27
to do that, by the way, is
23:29
because the number you'd get would be below five percent,
23:31
by the way. It's definitely lower.
23:34
But don't worry, I'm a sneaky little
23:36
shit, so I went and looked some
23:38
stuff up and I talked to some
23:40
people. Based on data from the market
23:42
intelligence firm Sensor Tower, Open AI's ChatGPT
23:45
app on Android and iOS is estimated
23:47
to have more than 339 million monthly
23:49
active users. And based on traffic data
23:51
from market intelligence company SimilarWeb, chatgpt.com
23:53
had 246 million unique monthly visitors, and
23:56
these were in January 2025. There's likely
23:58
some crossover with people using both the
24:00
mobile and web interfaces, though how big
24:02
that group is... kind of hard
24:04
to tell and remains uncertain. Though every
24:07
single person that visits chatgpt.com might not
24:09
become a user, it's safe to assume
24:11
that chatGPT's monthly active users are somewhere
24:13
in the region of 500 to 600
24:15
million. That's good, right? Its actual users
24:18
are higher than officially claimed. Right?
24:20
But the free
24:22
ones definitely are. It would also suggest
24:24
that the real conversion rate is somewhere
24:26
in the neighborhood of 2.58 to
24:29
3 percent from free to paid
24:31
users on ChatGPT, which is
24:33
astonishingly bad. And it's a fact that's
24:35
made worse by the fact that every
24:37
single user, regardless of whether they pay
24:40
or not, loses them money. Either way.
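Since several different user figures are flying around here, this is a small sketch of the conversion arithmetic using only the numbers already cited: the 15.5 million paying subscribers reported by The Information, the 400 million weekly active users OpenAI claims, and the 500 to 600 million monthly active user estimate built from Sensor Tower's roughly 339 million mobile monthly actives and SimilarWeb's 246 million monthly web visitors. The overlap between app and web users is unknown, so treat the monthly range as the rough estimate it is:

# Conversion-rate arithmetic from the figures quoted in this episode.
paying_subscribers = 15.5e6           # The Information, January 31 reporting
weekly_active_users = 400e6           # OpenAI's own claimed figure
estimated_mau_range = (500e6, 600e6)  # rough estimate; app/web overlap unknown

print(f"Paying / WAU (the 'less than 5%' framing): {paying_subscribers / weekly_active_users:.1%}")
for mau in estimated_mau_range:
    print(f"Paying / {mau / 1e6:.0f}M estimated MAU: {paying_subscribers / mau:.2%}")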
24:42
And while it's quite common for Silicon
24:44
Valley companies to play fast and loose
24:46
with metrics, this particular one is... Well,
24:48
it's deeply concerning, and I hypothesize that
24:51
Open AI is choosing to go with
24:53
weekly versus monthly active users in an
24:55
intentional attempt to avoid people calculating the
24:57
conversion rate of its subscription products. As
24:59
I will continue to repeat, these subscription
25:02
products lose the company money every single
25:04
time. Now let's talk product strategy, shall
25:06
we? Because I don't think Open AI
25:08
really has one. Open AI makes most
25:10
of its money from subscriptions, approximately $3
25:13
billion in 2024, and the rest on
25:15
API access to its models, approximately a
25:17
billion. As a result, Open AI has
25:19
chosen to monetize ChatGPT and its associated
25:21
products in an all-you-can-eat software subscription model,
25:24
or otherwise make money by other people
25:26
productizing it. And just to be clear,
25:28
in both of these scenarios, Open AI
25:30
loses money on every transaction. OpenAIs products
25:32
are not fundamentally differentiated or interesting enough
25:35
to be sold separately. It has failed,
25:37
as with the rest of the generative
25:39
AI industry, to meaningfully productize its models
25:41
due to the massive training and operational
25:43
costs and a lack of any meaningful
25:46
killer app use cases for large language
25:48
models. The only product that OpenAI has
25:50
succeeded in scaling to the mass market
25:52
is the free version of ChatGPT, which
25:54
loses the company money with every single
25:57
prompt and output. This scale isn't a
25:59
result of any kind of product market
26:01
fit, by the way. It's entirely media-
26:03
driven, with reporters making ChatGPT synonymous
26:05
with artificial intelligence, a thing they regularly
26:08
write about without thinking. As a result
26:10
I do not believe that the generative AI industry
26:12
is real. It's not a real industry which
26:14
I will define as one with multiple
26:17
competitive companies with sustainable or otherwise growing
26:19
revenue streams and meaningful products with actual
26:21
market penetration. And I feel this way
26:24
because this market is entirely subsidized
26:26
by a combination of venture capital and
26:28
hyperscaler cloud credits, and, well, real money,
26:31
I guess. ChatGPT is popular because it's
26:33
the only well-known product, one that's mentioned
26:35
in basically every article on AI. If
26:38
this were a real industry, other competitors
26:40
would also be mentioned all the time.
26:42
They would have similar scale, especially
26:44
those run by hyperscalers, but as
26:46
I'll get to later, data suggests
26:48
that open AI is the only
26:51
company with any significant user base
26:53
in the entire generative AI industry,
26:55
and it's still wildly unprofitable and
26:57
unsustainable. Open AI's models have also
26:59
been entirely commoditized. Even its reasoning
27:01
model, o1, has been commoditized by
27:04
both DeepSeek's R1 model and
27:06
Perplexity's agonizingly named R1 1776 model,
27:09
both of which have similar outcomes
27:11
at a much discounted price compared to
27:13
OpenAI's o1, though it's unclear and
27:16
unlikely in my opinion that these
27:18
models are profitable anyway. Open AI
27:20
as a company, well, they're just piss-
27:22
poor at product. It's been two years
27:24
and ChatGPT mostly does the same thing,
27:26
still costs more to run than it
27:29
makes, and ultimately does the same thing
27:31
as every other LLM chatbot from every
27:33
other company. The fact that nobody has
27:35
managed to make a mass market product
27:37
by connecting to OpenAI's models also suggests
27:39
that the use cases just aren't there.
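For a sense of scale on that point, the revenue mix quoted earlier, roughly $3 billion from subscriptions and roughly $1 billion from API access, works out as follows. These are the episode's approximations, not audited figures:

# Approximate 2024 revenue mix from the figures quoted in this episode.
subscription_revenue = 3.0  # $B, ChatGPT subscriptions
api_revenue = 1.0           # $B, selling model access via the API
total = subscription_revenue + api_revenue
print(f"API share of revenue: {api_revenue / total:.0%}")                   # ~25%
print(f"Subscription share of revenue: {subscription_revenue / total:.0%}")  # ~75%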
27:41
Furthermore, the fact that API access is
27:43
such a small part of its revenue
27:45
suggests that the market for actually implementing
27:47
large language models is relatively small. If
27:49
the biggest player in the space only
27:51
made a billion dollars in selling access
27:53
to its models unprofitably, and that amount
27:56
is the minority of its revenue, there might
27:58
not actually be a real industry here. And
28:00
I must be clear, if there was
28:03
user demand, this is where it
28:05
would show up: in the APIs. It would be
28:07
doing gangbusters, because people wouldn't be able
28:09
to help themselves, they'd just be all
28:12
over this generative AI shit. But they're
28:14
not. Hey
28:25
kids, it's me, Kevin Smith! And it's
28:27
me, Harley Quinn Smith. That's my daughter,
28:29
man, who my wife has always said
28:31
is just a beardless, dickless version of
28:33
me. And that's the name of our
28:35
podcast: Beardless, Dickless Me. I'm the old
28:37
one. I'm the young one. And every
28:40
week we try to make each other,
28:42
laugh. ... or wherever you get your podcast.
29:08
Hey y'all, it's your girl Chiquis and
29:10
I'm back with a brand new season
29:12
of your favorite podcast, Chiquis and Chill.
29:14
I'll be sharing even more personal stories
29:16
with you guys. And I know a
29:18
lot of people are gonna attack me.
29:21
Why are you gonna go visit your
29:23
dad? Your mom wouldn't be okay with
29:25
it. I'm gonna tell you guys right
29:27
now. I know my mom had a
29:29
very forgiving heart. That is my story
29:31
on plastic surgery. This is my truth.
29:33
I think the last time I cried
29:36
like that was when I lost my
29:38
mom, like that, like yelling. I was
29:40
like, no. I was like, oh, and
29:42
I thought, what did I do wrong?
29:44
And as always, you'll get my exclusive
29:46
take on topics like love, personal growth,
29:49
health, family ties, and more. And don't
29:51
forget, I'll also be dishing out my
29:53
best advice to you on episodes of
29:55
Dear Chiquis. So my fiancé and I
29:57
have been together for 10 years in
29:59
the first two years of being together
30:02
I find out he is cheating on
30:04
me not only with the women, but
30:06
also with men. What should I do?
30:08
Okay, where do I start? That's not
30:10
love. He doesn't love you enough because
30:12
if he loved you, he'd be
30:14
faithful. It's going to be an exciting
30:17
year and I hope that you can
30:19
join me. Listen to Chiquis and Chill,
30:21
season four as part of the My
30:23
Cultura podcast network, available on the iHeart
30:25
Radio app, Apple Podcasts, or wherever you
30:27
get your podcasts. Welcome to Pod of
30:30
Rebellion, our new Star Wars Rebels rewatch
30:32
podcast. I'm Vanessa Marshall. Hi, I'm Tiya
30:34
Sircar. I'm Taylor Gray. And I'm John
30:36
Lee Brody. But you may also know
30:38
us as Hera Syndulla, Spectre 2,
30:40
Sabine Wren, Spectre 5, and Ezra Bridger,
30:42
Spectre 6, from Star Wars Rebels.
30:45
Wait, I wasn't on Star Wars Rebels,
30:47
am I in the right place? Absolutely.
30:49
Guests like Steve Blum, voice of Zeb Orrelios, Spectre
30:51
4, or Dante Basco, voice of Jai
30:53
Kell, and many others. Sometimes we'll even
30:55
have a lively debate. And we'll have
30:58
plenty of other fun surprises in tribute
31:00
to. Oh, and me? Well, I'm the
31:02
lucky ghost crew, Stowaway, who gets to
31:04
help moderate and guide the discussion each
31:06
week. Kind of like how Kanan guided
31:08
Ezra in the ways of the force.
31:10
You see what I did there? Listen
31:18
to Pod of Rebellion on the iHeartRadio
31:20
app, Apple podcast, or wherever you get
31:23
your podcast. My name is Brendan Patrick
31:25
Hughes, host of divine intervention. This is
31:27
a story about radical nuns in combat
31:29
boots and wild-haired priests, trading blows with
31:31
J. Edgar Hoover in a hell-bent effort
31:34
to sabotage a war. J. Edgar Hoover was furious.
31:36
Somebody violated the FBI and he wanted
31:38
to bring the Catholic left to its
31:40
knees. The FBI went around to all
31:42
their neighbors and said to them, do
31:45
you think these people are good Americans?
31:47
It's got heist tragedy, a trial of
31:49
the century, and the goddamnedest love story
31:51
you've ever heard. I picked up the
31:53
phone and my thought was this is
31:56
the most important phone call I'll ever
31:58
make in my life. I couldn't believe
32:00
it. I mean, Brendan, it was divine
32:02
intervention. Listen to divine intervention on the
32:05
i-heart radio app, Apple Podcasts, or wherever
32:07
you get your podcasts. Some might argue
32:09
that Open AI has a new series
32:11
of products that could open up new
32:13
revenue streams, such as Operator, its agent
32:16
product, and Deep Research, its research product.
32:18
And I'm so fucking tired of hearing
32:20
about agents. Whenever you hear someone say
32:22
agent, really look at what they're saying,
32:24
because they want you to think of an autonomous
32:27
bit of software. What they're actually talking
32:29
about is either a chatbot or,
32:31
well, the dogshit that OpenAI
32:33
and Anthropic have warmed up,
32:35
which we'll get to shortly. But first,
32:38
let's talk costs. Both of these products
32:40
are very compute-intensive. Operator uses OpenAI's
32:42
Computer-Using Agent, the CUA, which combines Open
32:44
AI's models with virtual machines that take
32:47
distinct actions on web pages in this
32:49
extremely unreliable and costly way, where they
32:51
take screenshots as they scroll down. And
32:53
it just doesn't fucking work. I had
32:55
a whole thing about Casey Newton writing
32:58
about this. It's just so bad. Like,
33:00
Casey Newton, please go outside challenge. Just...
33:02
Just go outside Casey, stop with the
33:04
compute, you don't know what you're talking
33:06
about. But failures with these, and remember
33:09
these models, pretty much all of them,
33:11
are inconsistent, and the more in-depth the
33:13
thing you ask them to do, the
33:15
more likely there's going to be a
33:17
problem with it. So think about it
33:20
like this, failures from something you've asked
33:22
them to do will either increase the
33:24
amount of attempts you make to get
33:26
the thing you want, or make users
33:29
not use it at all. Not a
33:31
really great idea. Now let's talk deep
33:33
research. It uses a version of Open
33:35
AI's o3 reasoning model, which is a
33:37
model so expensive because it spends more
33:40
time to generate a response based on
33:42
the model reconsidering and evaluating steps as
33:44
it goes. OpenAI will no
33:46
longer launch o3 as a standalone model.
33:48
And that's really a good thing, when
33:51
you see a company be like, yeah,
33:53
you can't touch it, it's too expensive.
33:55
In short, these products are extremely expensive
33:57
to run, and this means that any
33:59
time their outputs aren't perfect, which is
34:02
to say a lot of the time,
34:04
there's a high likelihood that they'll be
34:06
triggered again, which will in turn spend
34:08
more compute. But let's talk about the
34:11
product market fit, because this is really
34:13
important. To use Operator or Deep Research
34:15
currently requires you to pay for a $200-a-month
34:17
subscription, which Sam Altman recently revealed still
34:19
loses them money because people are using
34:22
it more than expected, and that is
34:24
a quote. Furthermore, even on ChatGPT Pro,
34:26
deep research is currently limited to 100
34:28
queries per month, adding that it is
34:30
very compute intensive and slow. Though Altman
34:33
has promised that ChatGPT Plus and free
34:35
users will eventually get access to a
34:37
few Deep Research queries a month. Well,
34:39
that's not good for their cash burn. That's
34:42
actually bad for the cash burn. I'm not
34:44
sure it's going to make them... I'm
34:46
not really sure how that turns into
34:48
money anywhere. But let's talk about operator.
34:50
operator is this agent product where you're
34:53
meant to be able to be like,
34:55
hey, look, go and look, something up
34:57
for me, and it only works like
34:59
30% of the time, and it's just
35:01
very bad. And as I covered in
35:04
my newsletter a few weeks ago, this
35:06
product, which claims to control your
35:08
computer, isn't ready for prime time, and I don't think
35:10
it has a market. The way they're
35:12
selling this is that you'll be able
35:15
to make it do distinct tasks on
35:17
the computer, but even Casey Newton in
35:19
his article was like, yeah, it only
35:21
works sometimes, and the things it works
35:24
on are like searching TripAdvisor. Imagine
35:26
this, if you will. What if for
35:28
the cost of boiling a lake and
35:30
throwing an entire zoo into the lake
35:32
and boiling the animals inside it, you
35:35
could sometimes be able to search
35:37
TripAdvisor in two minutes versus, like,
35:39
five seconds. The future is so cool
35:41
I love living in it. But let's
35:43
talk about deep research for a second.
35:46
It's already been commoditized. Perplexity AI and
35:48
xAI have launched their own versions immediately,
35:50
and deep research itself is... not a
35:52
good product. As I covered in my
35:54
newsletter last week, the quality of the
35:57
writing that you receive from Deep Research
35:59
is really piss poor. And it's rivaled
36:01
only by the appalling quality of its
36:03
citations, which include forum posts and search
36:06
engine optimized content instead of actual news
36:08
sources. These reports are neither deep nor
36:10
well researched, and cost OpenAI a great
36:12
deal of money to deliver. And just
36:14
to give you a primer on what Deep Research
36:17
is meant to be: you're meant to
36:19
be able to type something in, and
36:21
it does like a 3,000-word
36:23
report. I really... you should go
36:25
and look it up. Go to my newsletter,
36:28
Where's Your Ed At (wheresyoured.at); it's the
36:30
piece before the
36:32
one that's going to come out when
36:34
these episodes come out. I forget the
36:36
name exactly. You need to go and
36:39
look at how shit Deep Research is.
36:41
You need to go and
36:43
look at how shit Deep Research is. It's
36:45
incredible that this money-losing juggernaut piece
36:48
of shit thinks that this is a
36:50
real product, when it doesn't work very well.
36:52
Let's talk about how they make money.
36:54
Let's talk about how they make money.
36:56
How they make money. How they make
36:59
money. Or don't. Both Operator and Deep
37:01
Research, like I told you, currently require
37:03
you to pay $200 a month to
37:05
a company that loses money all the
37:07
time, that also loses money on the
37:10
$200 a month. Neither product is sold
37:12
on its own, and while they may
37:14
drive revenue to the ChatGPT Pro product,
37:16
as said before, said product loses open
37:18
AI money. These products are also compute
37:21
intensive and have questionable outputs, making each
37:23
prompt very likely to create another follow-up
37:25
prompt. And the problem is you're asking
37:27
something that doesn't know anything, that probabilistically
37:30
generates answers, to research something. So as
37:32
a result, the research isn't going to
37:34
be any good. It's not like it's
37:36
going to research it and go, hey,
37:38
what would be a good source? It's
37:41
going to say, what matches the patterns?
37:43
What matches all the patterns that I've
37:45
been trained on? Ah, that's fine. Who
37:47
gives a shit? It's like having the
37:49
world's worst intern except the intern gets
37:52
a concussion every 10 minutes. But in
37:54
summary, both Operator and Deep Research are
37:56
expensive products to maintain, sold through an
37:58
expensive $200 a month subscription that, like
38:00
every other service provided by Open AI,
38:03
loses the company money, and due to
38:05
the low quality of their outputs and actions,
38:07
they are likely to increase user engagement to
38:09
try and get the desired output, incurring
38:12
further costs for Open AI. Well, you
38:14
know, like Ed, Ed, you say, Ed,
38:16
you're just being, you're just being a
38:18
hater, right? Just being a hater, things
38:20
don't look great today, but it's early
38:23
days, it isn't early days, but still,
38:25
Ed, it's early days, things don't look
38:27
great today. What about the future prospects
38:29
for OpenAI? Prospects, prospects for Open
38:31
AI. Things can't be that bad. Prospects
38:34
for Open AI. A week or two
38:36
ago, Sam Altman announced the updated roadmap
38:38
for GPT-4.5 and GPT-5. Now these
38:40
are their next-generation models that they have
38:42
been hyping up for the best part
38:45
of a year. Except GPT-4.5 didn't
38:47
exist before. It was always GPT-5.
38:49
Now, GPT-4.5
38:51
will be OpenAI's last model that isn't
38:54
a chain-of-thought model, as in
38:56
one of its reasoning models. Yes. GPT-5
38:58
will be, and I quote Sam Altman,
39:00
a system that integrates a lot of
39:02
open AI technology, including O3. What the
39:05
fuck are you talking about? Altman also
39:07
vaguely suggests that paid subscribers will be
39:09
able to run GPT5 at a higher
39:11
level of intelligence, which likely refers to
39:13
being able to ask the models to
39:16
spend more time computing an answer. He
39:18
also suggests that GPT-5, and I
39:20
quote, will incorporate voice, canvas, search, deep
39:22
research, and more. Fucking Bed Bath and
39:24
Beyond, motherfucker. Come on, my man, your company
39:27
spent $9 billion to lose $5 billion.
39:29
Why is anyone taking this seriously? This
39:31
is ridiculous. But both of these statements,
39:33
all of these statements honestly, vary from
39:36
vague to meaningless. But I hypothesize the
39:38
following. GPT-4.5 will be an upgraded version
39:40
of GPT-4o, OpenAI's foundation model you're
39:42
probably using right now. And it's code-
39:44
named Orion. Orion could literally be anything.
39:47
But one thing that Altman mentioned in
39:49
the tweet is that open AI's model
39:51
offerings have got too complicated. They'd be
39:53
doing away with the ability to pick
39:55
what model you use. He's glossing this up
39:58
and claiming it's unified intelligence. This
40:00
fucking guy. If I said this shit
40:02
to a doctor, they'd institutionalize me. They'd
40:04
say you sound like a lunatic. But
40:06
anyway, as a result of doing away
40:09
with the model picker, which is literally
40:11
the thing you click and you choose
40:13
GPT-400 or GPT-400-many or like the I-1
40:15
reasoning things, I think they're going to
40:18
attempt to attempt to moderate to moderate
40:20
cost by picking. If there's one thing
40:22
I've noticed with Open AI, they're not
40:24
very good at automating anything. So I
40:26
expect this to be bad. And I
40:29
believe that Altman announcing these things is
40:31
a very bad omen for Open AI.
40:33
Because Orion has been in the works
40:35
for more than 20 months and was
40:37
meant to be released at the end
40:40
of 2024, but it was delayed due
40:42
to multiple training runs that resulted in,
40:44
to quote the Wall Street Journal, software
40:46
that fell short of the results researchers
40:48
were hoping for. As an aside,
40:51
the Wall Street Journal refers to Orion
40:53
as GPT-5, this was from several months
40:55
back, but based on the copy in
40:57
Altman's comments, I believe Orion refers to
41:00
a foundation model from OpenAI, one
41:02
meant to replace the core GPT model
41:04
that powers ChatGPT. Open AI now appears
41:06
to be calling a hodgepodge of different
41:08
mediocre models, something called GPT-5. It's almost
41:11
as if Altman's making this up as
41:13
he goes along. Now the journal further
41:15
adds that as of December Orion performed
41:17
better than Open AI's current offerings but
41:19
hadn't advanced enough to justify the enormous
41:22
costs of keeping the new model running.
41:24
With each six-month long training run, no
41:26
matter how well it works, costing over
41:28
$500 million. Open AI also, like every
41:31
generative AI company, is running out of high-quality
41:33
training data, the data necessary to make
41:35
its models smarter based on the benchmarks
41:37
specifically made up to make LLMs seem
42:39
smart. And I should note that being
41:42
smarter means completing tests not new functionality
41:44
or new things that it can do.
41:46
Sam Altman demoting Orion from GPT-5 to
41:48
GPT4.5 suggests that Open AI has hit
41:50
a wall with making its new model,
41:53
requiring him to lower expectations for a
41:55
model OpenAI Japan president Tadao Nagasaki
41:57
had suggested would, and I quote, aim
41:59
for 100 times more computational volume than
42:01
GPT-4, which some took to mean 100
42:04
times more powerful when it actually means
42:06
it will take way more computation to
42:08
train or run inference on it. I
42:10
guess he was right. Now, if Sam
42:13
Altman, who is a man who loves
42:15
to lie, is trying to reduce expectations
42:17
for a product, I think we should
42:19
all be really, really, really worried. Now
42:21
large language models which are trained by
42:24
feeding them massive amounts of training data
42:26
and then reinforcing their understanding through further
42:28
training runs are hitting the point of
42:30
diminishing returns. In simple terms, to quote
42:32
friend of the show Max Zeff of
42:35
TechCrunch, everyone now seems to be
42:37
admitting you can't just use more compute
42:39
and more training data with pre-training large
42:41
language models and expect them to turn
42:43
into some all-knowing digital god. Max is
42:46
a fucking legend. What's allowed OpenAI to capture the entire tech
42:48
media has been its relationship with Microsoft,
42:50
because access to large amounts of compute
42:52
and capital allowed it to corner the
42:55
market for making the biggest, most hugest
42:57
large language model. Now that it's pretty
42:59
obvious this isn't going to keep working,
43:01
OpenAI is scrambling, especially now that
43:03
DeepSeek has commoditized reasoning models and proven that
43:06
you can build LLMs without the latest
43:08
GPUs. It's unclear what the functionality of
43:10
GPT-4.5 or GPT-5
43:12
will be. Does the market care about
43:14
an even more powerful large language model
43:17
if said power doesn't do anything new
43:19
or lead to a new product? Does
43:21
the market care if unified intelligence just
43:23
means stapling together various models to produce
43:25
more outputs that kind of look and
43:28
sound the same? As it stands, Open
43:30
AI has effectively no moat beyond its
43:32
industrial capacity to train large language models
43:34
and its presence in the media. Open
43:37
AI can have as many users as
43:39
it wants, but it doesn't matter because
43:41
it loses billions of dollars and appears
43:43
to be continuing to follow the money
43:45
losing large language model paradigm, guaranteeing they'll
43:48
lose billions of dollars more if they're
43:50
allowed to. This is the biggest player
43:52
in the generative AI industry, both the
43:54
market leader and the recipient of almost
43:56
every... every single dollar of revenue that
43:59
this industry generates. They have received more
44:01
funding and more attention than any startup
44:03
in the last few years, and as
44:05
a result, their abject failure to become
44:07
a sustainable company with products that truly
44:10
matter is a terrible sign for Silicon
44:12
Valley and an embarrassment to the tech
44:14
media. In the next episode, I'm going
44:16
to be honest, I have far darker
44:19
news. Based on my reporting, I believe
44:21
that the generative AI industry outside of
44:23
open AI is incredibly small, with little
44:25
to no consumer adoption, and pathetic amounts
44:27
of revenue compared to the hundreds of
44:30
billions of dollars sunk into supporting it.
44:32
This is an entire hype cycle fueled
44:34
by venture capital and big tech hubris,
44:36
with little real adoption and little hope
44:38
for a turnaround. Enjoy tomorrow's monologue, and
44:41
then the final part on Friday. Thank
44:50
you for listening to Better Offline.
44:52
The editor and composer of the
44:54
Better Offline theme song is Matosowski.
44:57
You can check out more of
44:59
his music and audio projects at
45:01
mattosowski.com, m-a-t-t-o-s-o-w-s-k-i dot com. You can email me
45:03
at ez at betteroffline.com or visit
45:05
betteroffline.com to find more podcast links
45:08
and of course my newsletter. I
45:10
also really recommend you go to
45:12
chat.wheresyoured.at to visit the
45:14
Discord and go to r slash
45:17
betteroffline to check out our subreddit.
45:19
Thank you so much for listening.
45:21
Better Offline is a production of
45:23
Cool Zone Media. For more from Cool Zone
45:25
Media, visit our website coolzonemedia.com
45:28
or check us out on the
45:30
iHeartRadio app, Apple Podcasts or
45:32
wherever you get your
45:34
podcast.