Episode Transcript
0:00
This is the everyday
0:02
AI show the everyday
0:04
podcast where we simplify
0:06
AI and bring its
0:08
power to your fingertips
0:10
Listen daily for practical
0:12
advice to boost your
0:14
career business and everyday
0:16
life. There's a new
0:18
most powerful AI model
0:20
in the world
0:23
Yeah, sometimes I
0:25
feel like DJ Khaled
0:27
because each week it's
0:29
like Another one. Another
0:31
one. Another most powerful AI
0:33
model in the world.
0:36
Y 'all, the last couple
0:38
of weeks, couple of months,
0:40
it has been a
0:42
back and forth, I think
0:44
specifically, between OpenAI and
0:46
Google for the ever -changing
0:48
title of most powerful AI
0:50
model in the world.
0:52
And I think now, with
0:54
OpenAI's new O3 specifically, it
0:56
is the most powerful. AI
0:59
model in the world. Is it
1:01
the most flexible? Will it
1:03
be the most used model?
1:05
I don't know, but we're
1:07
going to be going over
1:09
that and a lot more
1:11
today on everyday AI. As
1:13
we talk about the new
1:15
OpenAI's o3 and o4-mini
1:17
models unlocked inside the world's
1:19
newest, most powerful AI models.
1:22
All right, what's going on, y 'all? My name
1:24
is Jordan Wilson, and I'm the host of
1:26
Everyday AI, and this thing, it's for you.
1:28
It is your daily live stream podcast and
1:39
You are in the right place. So
1:42
you need to go to your everyday
1:44
ai.com. And there on our website, you
1:46
can not just sign up for our
1:48
free daily newsletter where we will be
1:50
recapping the most important aspects of this
1:52
show and sharing a lot more.
1:55
But we are going to share with
1:57
you everything else that's happening in the
1:59
business world, in AI world. So you can
2:01
be the smartest person in AI
2:03
at your company or in your department.
2:05
All right. So make sure if
2:07
you haven't already to go to your
2:09
everydayai.com to do that. So
2:11
I am very excited today to
2:13
talk about the new o3 and
2:16
o4 models from open AI. But
2:18
before we do, uh, let's start
2:20
as we do most days by
2:22
going over the AI news, uh,
2:24
and hey, live stream crew is
2:26
technically a two part show. So
2:28
I need your help. Uh, let
2:31
me know as I go over
2:33
the AI news, what o3 use
2:35
cases should we cover in tomorrow's
2:37
show in part two? All right.
2:39
Here's what's happening in the world
2:41
of AI news. A couple of
2:43
big things. So Chinese tech giant
2:45
Huawei is preparing to begin mass
2:48
shipments of its new 910C AI
2:50
chip in May, aiming to fill
2:52
the gap left by US restrictions
2:54
on Nvidia's H20 chips
2:56
according to Reuters. So
2:59
the new chip from
3:01
Huawei, the 910C, achieves
3:03
performance comparable to Nvidia's
3:06
H100 by combining two
3:08
existing 910B processors, representing
3:10
a key shift for
3:12
Chinese AI developers who
3:14
need domestic alternatives. So
3:17
Washington's latest AI export
3:19
controls have pushed Chinese AI
3:21
companies to seek more
3:24
homegrown solutions, making Huawei's 910C
3:26
likely to become the
3:28
main AI chip for China's
3:30
tech sector. So yeah,
3:33
it looks like Nvidia could
3:35
potentially have a strong
3:37
new competitor in Huawei. All
3:40
right, next, a small thing.
3:42
But I think that could
3:44
have a big impact. So open
3:46
AI has quietly introduced memory
3:48
with search. much different than their
3:50
memory feature they rolled out
3:52
about two weeks ago. So this
3:54
allows ChatGPT to use
3:56
personal details from prior chats specifically
3:58
to tailor web search queries.
4:01
All right. So yes, uh, open
4:03
AI rolled out their expanded
4:05
memory feature a couple of weeks
4:07
ago that allows ChatGPT
4:09
to use personal details, but that
4:11
did not apply to web queries.
4:13
Uh, so this new update
4:15
means ChatGPT can now
4:17
rewrite user prompts to reflect individual
4:19
preferences while browsing the web,
4:21
such as, you know, whatever you
4:23
share with it, dietary restrictions,
4:25
location, uh, et cetera,
4:27
to bring you more accurate search results.
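For anyone who wants to picture how that kind of rewrite could work, here is a minimal toy sketch in Python. It is not OpenAI's implementation; the profile fields and the rewrite rule are made up for illustration.

```python
# Toy sketch only: folding remembered personal details into a web search
# query. Not OpenAI's implementation; fields and logic are hypothetical.

user_memory = {
    "dietary_restrictions": "vegetarian",  # remembered from earlier chats
    "location": "Chicago",
}

def rewrite_search_query(prompt: str, memory: dict) -> str:
    """Append remembered preferences so the search reflects the user."""
    extras = " ".join(str(v) for v in memory.values())
    return f"{prompt} {extras}".strip()

print(rewrite_search_query("best deep dish pizza near me", user_memory))
# -> best deep dish pizza near me vegetarian Chicago
```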
4:29
So this move
4:31
follows recent upgrades that let
4:34
ChatGPT reference users' entire chat
4:36
history, further distinguishing it from
4:38
competitors that don't have this feature
4:40
enabled. Uh, users
4:42
can turn off this feature in
4:44
settings, but the rollout appears
4:47
to be very limited so far
4:49
with only a few accounts
4:51
reporting early access. So yeah, make
4:53
sure to keep an eye
4:55
out for that. All right. One
4:57
last thing to keep an
4:59
eye out on is while bringing
5:01
AI into the classroom. Uh,
5:03
so the Trump administration is weighing
5:06
an executive order that would
5:08
require federal agencies to promote artificial
5:10
intelligence training in K through
5:12
12. And this is according to
5:14
a draft obtained by the
5:16
Washington Post. This is technically super
5:18
breaking news, only a couple
5:20
minutes old. So the draft policy
5:22
directs agencies to train students
5:25
in using AI and integrate the
5:27
technology into teaching tasks, signaling
5:29
a potential national shift in how
5:31
schools approach technology education. So
5:33
agencies would also partner with private
5:35
companies to develop and implement
5:37
AI related programs for students aiming
5:39
to better prepare them for
5:42
careers shaped by AI. So
5:44
the proposal, which is in draft
5:46
form right now, is still under
5:48
review and could change or
5:50
be abandoned. However, if it is
5:52
enacted, it could significantly shape
5:55
how the next generation learns and
5:57
works with artificial intelligence. I
5:59
would love to see this happen
6:01
personally. This is a little tidbit,
6:03
y'all, I haven't shared
6:05
this much, but I just saw
6:07
you know Jackie here in our
6:09
comments holding it down. I'm teaching
6:11
a course at DePaul here
6:13
in Chicago and like I'm flipping
6:15
the script on its head I'm
6:17
saying you have to use AI
6:19
at every single junction like don't
6:21
Go old school. Don't write in
6:23
all of these aspects. You should
6:26
be using AI in every single
6:28
aspect. So it should be pretty
6:30
interesting to see how this new
6:32
executive order unfolds and if it
6:34
actually is introduced. All
6:36
right a lot more on
6:38
those stories and a ton
6:40
more on our website,
6:42
youreverydayai.com. Alright, let's
6:44
get into it Let's talk
6:46
about the newest and I
6:48
think the most powerful AI
6:50
models in the world All
6:52
right from open AI, but
6:54
again, just because
6:56
it's the most powerful,
6:59
that doesn't mean
7:01
it's necessarily
7:03
the best or the most
7:05
flexible. Right. Those are three
7:07
very different things. I do
7:09
think by far the new open
7:11
AI 03, which is the
7:14
full version. And then we have
7:16
the o4-mini and o4
7:18
mini high. Yeah, the naming is
7:20
terrible. Open AI has said
7:22
that they're going to address this
7:24
naming problem because it's extremely
7:27
problematic. Right. But the
7:29
new o3 and o4 models
7:31
are extremely impressive,
7:33
specifically the O3. All right.
7:35
And if you're confused, like, oh,
7:37
Jordan, why is the O3
7:39
better than the O4? Well, that's
7:42
because the O4 is a
7:44
mini. So we have O4 mini
7:46
and O4 mini high. But
7:48
now we have the O3 full
7:50
model, right? Whereas previously we
7:52
had O3 mini and O3 mini
7:55
high. Confusing. But this is
7:57
the first kind of full O
7:59
model that we've had since
8:01
O1. Yes, I know it's confusing
8:03
that they skipped O2 because
8:05
of some naming rights with, uh,
8:07
I believe a British telecom
8:10
very confusing with the model names,
8:12
but here is what is
8:14
not confusing. This new model is
8:16
extremely impressive. All right. So,
8:18
uh, live stream audience.
8:21
Good morning, good morning, like
8:23
what Will said here
8:25
on LinkedIn. Love, love
8:27
to see it. Everyone, let me
8:29
know what questions you have
8:31
about this new O3 and O4
8:33
models. You know,
8:35
I'll either tackle them today,
8:37
later on our live stream
8:39
here, or I will... You
8:41
know make sure that we
8:44
do this tomorrow in part
8:46
two. So it's good to
8:48
see everyone on on linkedin
8:50
and on youtube Thanks. Thanks
8:52
for tuning in everyone love
8:54
to see us learning together
8:56
live. All right Let's get
8:58
into it shall we so
9:00
here's the overview on the
9:02
new o3 and o4 models.
9:04
So these were just released
9:06
about a week ago, and
9:08
this is the kind of
9:10
the newest successors in OpenAI's
9:12
O series. So yeah,
9:14
I just laid out a bunch
9:16
of O's, which, which by
9:18
the way, has anyone had O's
9:20
the cereal? I was
9:22
talking about this with my wife. They
9:25
are so underrated, like maybe my
9:27
favorite. top five favorite cereal. That's beside
9:29
the point. But so many different
9:31
O's, right? You have O1 and still,
9:33
right? So they got rid of
9:35
O3 mini high. But, you know, if
9:37
you're on a pro plan right
9:39
now, as an example, I believe you
9:41
have O1, you have O1 pro.
9:44
You have O3 full and then you
9:46
have O4 mini, O4 mini high.
9:48
It's five different O-series models across
9:50
three different classes. Extremely confusing, right?
9:52
And obviously, you know, OpenAI is
9:54
in the future moving away from this
9:56
and treating GPT-5 as a
9:58
system. But essentially, if you're wondering what
10:00
all these O models are, these are
10:02
the thinking models. These are the models
10:04
that can reason and plan ahead
10:07
step by step under the hood before
10:09
they give you a response. Whereas
10:11
the GPT models, so
10:13
as an example, GPT-4
10:15
or GPT-4.5.
10:17
They are more instantaneous,
10:19
right? They're not necessarily thinking like
10:21
a human would step by step using
10:23
this chain of thought reasoning under
10:25
the hood before it gives you a
10:28
response. So I like to say
10:30
there's two very different classes of models
10:32
from open AI. You have your
10:34
quote unquote old school transformers, and then
10:36
you have your quote unquote new
10:38
school O -Series model, which are your
10:40
thinkers and your reasoners. All right. So
10:42
this was just released less
10:44
than a week ago. And here's
10:47
the biggest part. It is capable of
10:49
using all of open AI's tools,
10:51
which is the biggest differentiator between the
10:53
o1 and the o3 models; o1
10:55
could not use every single tool. Because
10:57
when we talk about agentic AI
10:59
and yeah, that's what I think o3
11:02
is. It is an agentic model
11:04
at its core and we're gonna see
11:06
that I think tomorrow when we
11:08
go through some of these use cases
11:10
live. But the biggest difference, or
11:12
one of the biggest differentiators here is
11:15
o3 can use all tools:
11:17
web search, Python,
11:19
file uploads, computer vision with
11:21
the visual input reasoning, and
11:23
also image generation. It can
11:25
literally do everything, whereas the
11:27
previous O -Series models were
11:29
a little limited, right? And
11:31
some of them were different. You
11:34
know, even now you can use
11:36
Canvas, which is more of this interactive
11:38
mode that can run and render
11:40
code inside the new O3 model. Whereas
11:42
before, it's like, okay, the o1
11:44
model, is the only one that could
11:46
use Canvas. But O1 wasn't very
11:48
good at many things because O1 Pro
11:50
and O3 Mini were better, or
11:53
sorry, O3 Mini High. And then O3
11:55
Mini High could use the internet, but
11:57
you couldn't upload files and it couldn't
11:59
use Canvas, right? And then you had
12:01
O1 Pro that you could upload files,
12:03
but you couldn't use Canvas and it
12:05
couldn't browse the web, right? So it
12:07
was kind of hard with all these
12:09
different O models. And, you know, they
12:11
all kind of had their own kind
12:14
of unique features. But
12:16
now, O3, I do think
12:18
this is an agentic model,
12:21
right? And I know that
12:23
sounds crazy to say, but
12:25
it is extremely powerful and it
12:27
can use every single tool
12:29
under its kind of tool belt.
12:31
And it's trained to autonomously
12:33
decide when and how to use
12:35
these tools. That is what
12:37
I think makes it probably the
12:39
most powerful AI model in
12:41
the world. And it responds with
12:43
rich answers, typically in under
12:45
a minute. And it
12:48
is right now, if you have
12:50
a paid plan to ChatGPT,
12:52
you have access to it. So
12:54
whether that's ChatGPT Plus, Pro,
12:56
teams, et cetera, you have access.
12:58
It's also available in the API. There are limits though.
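If you want to poke at it from the API side, a minimal sketch with the official OpenAI Python SDK looks something like this. Whether the "o3" model string is enabled for your account is an assumption here, so swap in whatever model name your tier actually exposes.

```python
# Minimal sketch: calling a reasoning model through the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set and that your account has API access to "o3";
# adjust the model name to whatever your tier exposes.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3",  # assumption: replace with a model available to you
    messages=[
        {
            "role": "user",
            "content": "In three bullets, when should I pick o3 over o4-mini?",
        }
    ],
)

print(response.choices[0].message.content)
```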
13:01
All right,
13:03
so if you are on either
13:05
a chat GPT plus account,
13:07
that's your standard paid account at
13:09
$20 a month, or if
13:11
you're on a team account or
13:13
enterprise account, it's pretty limited.
13:15
So you only have 50 messages
13:17
a week with the best
13:19
one, which again is o3. Not
13:21
o4, right? So o4-mini
13:23
is not the best one. o3 is, right?
13:25
I'm just going to say o3 full. That's
13:27
what a lot of people, including myself, are
13:29
calling it since we previously had the o3
13:31
mini. And then we're having to deal with
13:34
the o4-mini and people are confused. So
13:36
o3 full is the best model. But right
13:38
now, if you're on a paid plan, you
13:40
only have about seven messages a day, or
13:42
about 50 messages a week. So not a
13:44
ton. With o4-mini, you
13:46
have 150 messages a day. With
13:48
o4-mini high, you have 50
13:50
messages a day. So if you
13:52
are a power user on a
13:54
paid plan, you might want to
13:56
start with o4-mini high. You
13:59
have 50 messages a day and
14:01
then maybe save those seven messages
14:03
a day for the time that
14:05
you really need a little more
14:07
juice, a little bit
14:09
more compute, more smarts, then you
14:11
can hand those over to o3 full. If
14:14
you are on the pro
14:16
plan, which is $200 a month,
14:18
you have quote -unquote near unlimited
14:20
access. OpenAI
14:22
says, yeah, there's some fair
14:24
use things that you have to
14:26
adhere to, but for the
14:28
most part, It is unlimited. Uh,
14:31
so I have Plus plans,
14:33
uh, $200-a-month Pro plans.
14:35
I have multiple team plans. I have
14:37
multiple enterprise accounts, uh, for companies that
14:39
hire us, uh, to train their employees.
14:41
So yeah, if you're trying to do
14:43
that, you can reach out to us.
14:45
We can train your team. Uh, right.
14:47
So it is kind of weird. I'd
14:50
say, uh, that the teams accounts and
14:52
the enterprise accounts have the same limits
14:54
as the Plus account. You would think
14:56
or hope it would have 2x 3x
14:58
Especially the enterprise y 'all open AI you
15:00
got to get together. I'm hearing a
15:02
lot of grumblings From companies that have
15:04
invested heavily into enterprise accounts and they
15:06
can't you know, they can't get kind
15:09
of the same power that you can
15:11
get with an individual account. I know
15:13
it, it comes with a pricing, uh,
15:15
right? Uh, paying, uh, I think anywhere
15:17
between $30 to $50 for an enterprise
15:19
seat versus $200 for a pro seat.
15:21
But so many of these companies are
15:23
investing in hundreds or thousands of seats
15:26
for their enterprise teams. Open
15:28
AI, you gotta give them more juice. Just saying.
15:30
All right. So what
15:32
the heck is new? Let's
15:34
go over it. So. advanced
15:36
tool use. So like I
15:38
talked about, it has autonomous
15:40
access to browsing, coding, and
15:42
visual tools. The image
15:44
understanding, it is improved. The
15:47
visual capabilities are much improved. And
15:50
O3 does a great job
15:52
at interpreting complex visual inputs,
15:54
like as an example, research
15:56
papers. It has a
15:58
much larger context window in
16:00
the chat GPT interface. Finally, uh,
16:03
right. So finally within the
16:05
chat, GPT interface, we have a
16:07
200 K token context window. Okay.
16:10
So you, it can handle longer
16:12
multi step tasks seamlessly. And you
16:14
can share a ton of information
16:16
without it forgetting things. Whereas
16:19
previously, you know, unless you were
16:21
on an enterprise plan, we still,
16:23
for the most part had a
16:25
32,000-token context window on
16:27
the chat side of ChatGPT,
16:29
right? It was different on the
16:31
API side, but a lot of
16:34
users inside of ChatGPT, if
16:36
they were especially copying and pasting
16:38
a lot of information, ChatGPT
16:40
was forgetting things, right? Because that
16:42
32,000, uh, context window, it's about
16:44
27, 28,000 words of input
16:46
and output, which isn't a ton.
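If you want to sanity-check whether something you are about to paste fits in a 32,000-token versus a 200K-token window, you can count tokens locally with OpenAI's tiktoken library. A rough sketch is below; the encoding chosen is an assumption for newer models, and the words-per-token ratio varies by text, so treat the numbers as ballpark.

```python
# Rough sketch: estimate whether a chunk of text fits in a given context window.
# pip install tiktoken
import tiktoken

# Assumption: cl100k_base is a stand-in encoding; newer models may differ,
# so the count is approximate.
enc = tiktoken.get_encoding("cl100k_base")

def fits_in_window(text: str, window_tokens: int) -> bool:
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens:,} tokens, ~{len(text.split()):,} words")
    return n_tokens <= window_tokens

report = "quarterly revenue and churn analysis " * 6000
print(fits_in_window(report, 32_000))    # the old in-app limit
print(fits_in_window(report, 200_000))   # the new o3 limit in ChatGPT
```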
16:49
Uh, so it's, it's a welcome
16:51
sight to see a 200K
16:53
token context, uh, in the new
16:55
models. Improved reasoning, uh, another thing
16:57
that's new, and the ability
16:59
to chain multiple tool calls together
17:02
for layered analysis. And I think
17:04
that is probably the standout feature.
17:06
And there's some new safety features
17:08
as well, right? OpenAI doesn't want
17:10
to accidentally start a biochemical war, which
17:13
might, you might be like kind
17:15
of chuckling and rolling your eyes, but
17:17
no, seriously. So
17:19
good on OpenAI for addressing these
17:21
things, you know, when they
17:23
release new models and they give
17:25
it essentially levels or warning
17:27
levels. So they address that on
17:29
their website as well. And
17:32
there's new features that can reduce
17:34
the risk and enhance trust. All
17:37
right. Are
17:43
you still running in circles trying to figure
17:45
out how to actually grow your business with
17:47
AI? Maybe your company has
17:49
been tinkering with large language models for
17:51
a year or more, but can't really
17:53
get traction to find ROI on Gen
17:56
AI. Hey, this is Jordan Wilson, host
17:58
of this very podcast. Companies
18:00
like Adobe, Microsoft and Nvidia have partnered
18:02
with us because they trust our expertise
18:04
in educating the masses around generative AI
18:06
to get ahead. And some of the
18:08
most innovative companies in the country hire
18:11
us to help with their AI strategy
18:13
and to train hundreds of their employees
18:15
on how to use gen AI. So
18:17
whether you're looking for chat, GPT training
18:19
for thousands or just need help building
18:21
your front end AI strategy, you can
18:23
partner with us too. just like some
18:26
of the biggest companies in the world
18:28
do. Go to your everydayai.com slash partner
18:30
to get in contact with our team.
18:32
Or you can just click on the
18:34
partner section of our website will help
18:36
you stop running in those circles and
18:39
help get your team ahead and build
18:41
a straight path to ROI on gen
18:43
AI. And
18:48
if you're a little confused and
18:50
you're like, wait, this is the new
18:52
feature. I thought it was a
18:54
different feature. Yeah. Let me
18:56
quickly get you up to speed. If
18:58
you've been sleeping under an AI rock
19:00
for three weeks, here's what else is
19:03
new at open AI and chat GPT,
19:05
because you might be confused. And I
19:07
want to really tell you, no, no,
19:09
no, no, this is separate, right? Uh,
19:11
so yeah, we've been hearing a lot
19:13
of buzz the last couple of weeks
19:15
about this new GPT-4o image
19:17
gen. Okay. That is different. This
19:19
is, you know, o3, a different beast
19:22
altogether, but it can use, uh,
19:24
the image gen. Uh,
19:26
then in April, we had the
19:28
memory rollout across all chats. So
19:30
essentially, if you have this enabled
19:32
ChatGPT can pull in conversation
19:34
or can pull in information from
19:36
past chats, which is different than
19:38
memories, which were essentially individual nuggets
19:40
that were stored in kind of
19:43
a memory bank. But now ChatGPT,
19:45
it does this via kind
19:47
of a search pull, uh, and
19:49
semantic, uh, you know, keyword
19:51
matching, um, to deliver
19:53
kind of personalized results. I
19:55
personally. hate this, right? Because it's
19:57
always trying to personalize things based
19:59
on my past chats, but that's
20:01
new. All right. And then we
20:03
also had the Google drive connector
20:06
rollout for ChatGPT Teams
20:08
accounts, uh, about three weeks ago.
20:10
And then also last week, uh,
20:12
we got, was that last week?
20:14
Yeah. My weeks are starting to
20:16
blur together, y 'all. Uh, so
20:18
yeah, it was last Monday that
20:20
open AI released another set of
20:22
new models. So. Don't get confused. These
20:25
other models were
20:27
GPT-4.1, GPT-4.1
20:29
mini, and GPT-4.1 nano.
20:32
However, those are not available
20:34
inside chat GPT. Those
20:36
are only available on the
20:38
developer side. All
20:41
right. So I think those are kind of the
20:43
highlights of those: context
20:46
window up to a million tokens,
20:48
huge. Actually, the
20:50
GPT-4.1 mini was stealing a
20:52
lot of the headlines, rightfully
20:54
so, because it was really
20:56
outpunching its mini moniker. But
20:59
the 4.1 models, I
21:01
think, were much better in
21:03
coding and just a
21:05
pretty big improvement both on
21:07
cost and performance when
21:09
it came to the model
21:11
that it was following
21:13
in GPT-4o. Alright,
21:15
so these new O -Series models are
21:17
not that, right? But I do think
21:19
it was worth pointing out. Yeah,
21:21
there's been a lot of new things
21:23
happening, uh, inside chat gpt that
21:25
are not these O series models. So
21:28
I figured I'd take two minutes
21:30
here, uh, to get you caught up.
21:32
Yeah. Uh, like what Jackie's saying,
21:34
uh, need a cheat sheet, a cheat
21:36
sheet. Yeah. Maybe I should create
21:38
one. Uh, Kevin is saying, uh, uh,
21:40
Kevin from, uh, YouTube is saying
21:42
it's annoying in the paid education version.
21:44
I still can't access it. So
21:46
I'm guessing. Kevin, you're talking about O3.
21:49
Uh, yeah, it should, it should be rolling
21:51
out. You know, I know this sounds weird.
21:53
It's kind of like, Oh, you know, restart
21:55
your computer, you know, take out the SNES
21:58
cartridge and blow on it. Right. So many
22:00
times it is like a cookie issue, uh,
22:02
or a caching issue. So if you, you
22:04
know, log out of your ChatGPT
22:06
account, maybe clear your cache and log back
22:08
in, it might be there. That's actually the
22:10
way I always do it. Uh, whenever there's
22:12
new models announced, I do that like two
22:14
or three times a day, uh, to try
22:16
and get access a little earlier, even though
22:18
OpenAI does kind of control those rollouts. All
22:20
right. Let me answer the question. Is
22:23
this the best model in
22:25
the world? So
22:28
yes and no, I think it
22:30
is the most powerful AI model
22:32
in the world. I think best
22:34
depends on your use case. Uh,
22:36
is it the most flexible right
22:39
now? No. So let me say
22:41
that again. I, yes,
22:43
I 100 % believe it
22:45
is the most powerful AI
22:47
model in the world. It
22:49
is not the most flexible.
22:51
And if it's the best
22:53
depends on your use case.
22:55
So obviously right now it's
22:57
kind of jabbing back and
22:59
forth with the Gemini 2 .5
23:01
Pro from Google. And we'll
23:03
see as, you know, more
23:05
user feedback starts to roll
23:08
out. But when it comes
23:10
to just pure upside, just
23:12
the ceiling, strictly power. I
23:15
think o3 is unmatched right now. Does
23:19
it like, does that mean that
23:21
I'm only like, right? Does that mean
23:23
me personally? I'm only going to
23:25
be using o3. Absolutely not. Right. I'm
23:27
still going to be using Gemini
23:29
2 .5 pro all the time. The
23:31
big difference is y 'all, and we're
23:33
going to talk about this a little
23:35
bit with benchmarks. Uh, Gemini 2 .5
23:37
pro is a hybrid model, which
23:39
makes it much more flexible because in
23:41
certain instances, especially if you're having
23:43
iterative conversations back and forth conversations, uh,
23:45
with a model, which is what
23:47
you should be doing. Sometimes if you're
23:49
using these O -series models, you can
23:51
ask a very simple query or
23:53
a very simple follow -up query and
23:55
it might think for like minutes, right?
23:58
So in terms of flexibility
24:00
and usability, it might
24:02
not always be the best for some
24:04
of those conversations that are a
24:06
little more nuanced and don't just require,
24:09
you know, big AI brains. But
24:11
if you need big AI
24:13
brains in an agentic type
24:15
of large language model interface,
24:18
o3 is it. And it
24:20
is so, so impressive. Right.
24:24
But let's look at some of
24:26
the benchmarks. And here's here's one
24:28
thing that I kind of wanted
24:30
to call out. Right. So on
24:32
this show, we talk a lot
24:34
about the LM arena, right? And
24:36
this thing called an ELO score. And
24:39
what that means is you put
24:41
in a prompt, OK, and then
24:43
you get two blind outputs. And
24:45
you decide which one is better,
24:48
output A or output B. All
24:50
right. And that essentially over time, when
24:52
there's enough votes, a new model that
24:54
gets released gets an ELO score. Essentially,
24:56
you know, it comes from ELO scores
24:58
and chess, and it's like, hey, head
25:00
to head, this is what humans prefer the most.
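For anyone curious what an ELO score actually is under the hood, here is a minimal sketch of the standard Elo update applied to one of those A-versus-B votes. The K-factor and starting ratings are generic chess-style defaults, not LM Arena's exact parameters.

```python
# Minimal sketch of an Elo rating update for one head-to-head vote.
# K-factor and starting ratings are generic defaults, not LM Arena's settings.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    ea = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - ea))
    return new_a, new_b

# One voter prefers output A over output B; both models start at 1500:
print(elo_update(1500, 1500, a_won=True))  # -> (1516.0, 1484.0)
```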
25:03
So right now, the top model
25:05
on that list is Gemini 2.5
25:07
Pro. And here's why
25:09
I'm bringing this up as a
25:11
caveat. Right now, o3 full
25:13
does not yet have enough votes
25:15
to be on the, uh, chatbot
25:18
arena leaderboard that could change in
25:20
a couple of hours or in a
25:22
couple of days. It could be
25:24
up there pretty soon. However, I do
25:26
not expect the o3 full
25:28
model to do very well when it
25:30
comes to head to head human
25:33
comparisons. And here's the reason why when
25:35
you look at o3 mini
25:37
high, right, which was my workhorse model,
25:39
right before Gemini 2 .5 pro came
25:41
out. I'd say o3
25:43
mini high, that was
25:45
getting about 60% of my
25:47
usage. Humans
25:49
head to head for the most
25:52
part don't prefer it, right? Um,
25:55
and one of the reasons
25:58
why, I think, is you have these
26:00
traditional large language models that
26:02
focus on kind of quick
26:04
snappy responses. You have
26:06
these thinking models, which just take
26:08
longer and really only showcase
26:10
their abilities when it comes to
26:12
when you're asking it for
26:15
a very tough question, right?
26:17
And then you have your
26:19
hybrid models. So I think, ultimately,
26:21
the hybrid models are going to be
26:23
the ones that on a head -to -head
26:26
ELO score, those are going to be
26:28
the ones that do best. I don't
26:30
think these thinking models uh, strictly thinking
26:32
models are ever going to do that
26:34
great in human comparison. The way I
26:36
think about it is like, okay, think
26:38
of someone, you know, that's, you know,
26:40
super personable and has a ton of
26:43
business savvy and is super smart, right?
26:45
That's like Gemini 2 .5 pro. Then
26:47
you think of something like
26:49
Einstein, right? And a lot
26:51
of people, what they're putting
26:53
queries, uh, you know, into
26:56
LM Arena, you know, it's,
26:58
it's kind of quippy things,
27:00
fun things, right? Like, you
27:02
know, write me a haiku
27:04
about explaining large language models
27:06
using basketball terms, right? Not
27:10
something that an Einstein level
27:12
model would necessarily excel at. So
27:14
I'm just putting this out
27:16
there. Once the o3 full model
27:18
hits the chatbot arena, I
27:20
don't necessarily foresee it, you know,
27:22
being a top, you know,
27:24
a top three model. I do
27:26
think probably Gemini 2 .5 Pro,
27:28
because it is a hybrid
27:30
model, will still retain its lead
27:32
on that specific benchmark. However,
27:37
however, look
27:39
at some of the other
27:42
comprehensive sets of benchmarks that
27:44
have already gone through with
27:46
the new O3 full, or
27:48
as some people are calling
27:50
it, O3 high. And
27:53
it's the best. So as
27:55
an example, if you look at
27:57
live bench, okay. So live
27:59
bench is a benchmark for large
28:01
language models designed with test
28:03
set contamination and objective evaluation in
28:05
mind. So I'm reading off
28:07
their website here. Uh, it has
28:09
the following properties. Live bench
28:11
limits potential contamination by releasing new
28:13
questions regularly. So then that
28:15
way it won't get into, uh,
28:17
you know, models, uh, testing
28:19
sets. Each question has
28:21
verifiable objective ground truth answers,
28:23
right? So it eliminates kind of
28:25
the need for a large
28:27
language model judge. So it's fact or
28:29
fiction, no gray area. And
28:31
then live bench currently has a
28:33
set of 18 diverse tasks
28:35
across six categories, right? So
28:38
language, data analysis, math, coding,
28:40
reasoning, et cetera. And then you
28:42
have a global average. So
28:44
on live bench, which I think
28:46
is a good third party
28:49
benchmarking system, O3 is better than
28:51
Gemini 2.5 with a global
28:53
average of 81.5, and Gemini
28:55
2 .5 is the next best
28:57
model aside from OpenAI's O
28:59
models, which actually take up the
29:01
first three spots. So Gemini
29:04
2 .5 comes in at a
29:06
77.4. So o3
29:08
high is much better at
29:10
81.5. Similarly, another one
29:12
that we talk about a
29:14
lot is the artificial
29:16
analysis index. So again
29:18
a very reputable and I'd
29:20
say probably one of the
29:22
most trustworthy third -party benchmarking
29:24
services out there So they
29:26
haven't done o3 full yet.
29:28
I believe because not all
29:30
of the capabilities are available
29:32
in the API, whereas on
29:34
o4-mini high they are.
29:36
Okay, so on o4-mini
29:38
high, which is a mini
29:40
model on the intelligence index
29:42
It is the best model
29:44
or the most powerful model
29:46
in the world. All right. So
29:49
right now it
29:51
is ahead of Gemini
29:53
2 .5 Pro by
29:55
two points. All
29:58
right. And this I think
30:00
is pretty important because again,
30:02
you are comparing a mini
30:04
model. So I assume once
30:06
the full model is put
30:08
through some of these tests,
30:10
it will be even further
30:12
ahead. But the 04 mini
30:14
high is two points ahead of
30:17
Gemini 2.5 Pro.
30:19
So when it comes to
30:21
unbiased third party benchmarks that look
30:23
at a lot, uh, it
30:25
has been decided: o3 and
30:27
o4, right? This is
30:29
the most powerful model in the
30:31
world. Could Google clap back
30:33
next week and release a brand
30:35
new, you know, 2 .6 pro.
30:37
Absolutely. I'm sure they have
30:39
something ready to go. But today,
30:41
if you are looking for
30:43
the most powerful model in the
30:45
world, o3 and o4. That's
30:47
where it's at. So
30:50
the standout feature, which is something that
30:52
we're going to be doing in part two
30:54
tomorrow. Uh, and let me
30:56
know again, what use
30:58
cases do you want to see in
31:00
our part two, but the standout feature
31:02
by far is agentic tool use.
31:04
So if you're listening on the podcast,
31:06
this'll make a little bit more sense
31:09
on the live stream where I have
31:11
a couple of graphics here. Okay. But
31:13
as an example, and this is from
31:15
open AI's, um, Uh, kind of website,
31:17
uh, going over 03. It says, I
31:19
took this pic earlier. So again, visual
31:21
understanding, the ability to reason with photos
31:23
and use, uh, kind of on its
31:26
own, uh, terms, decide when and
31:28
how often to use these tools. So
31:30
it says, I took this pic earlier.
31:32
Can you find the name of the
31:34
biggest ship you see and where it
31:36
will dock next? All right. This is
31:38
tricky because there's in this photo that
31:40
they upload. All right. The ships are
31:42
number one. They're out of focus. They're
31:44
a little blurry, but also
31:46
they're at different perspectives, right?
31:49
So it could be one ship
31:51
just appears bigger because it's closer
31:53
and the other ship could be
31:55
larger, but it's further away. So
31:58
it reasoned for only a minute
32:00
and a half and it even is
32:02
talking it through. Right. So like,
32:04
here's kind of the, the chain of
32:06
thought or the reasoning that the
32:09
model is going through. It says, I
32:11
think I miss the ships in
32:13
the crop. They seem to be off
32:15
to the left, which my human
32:17
eye did not even see this. It
32:19
says, I'll zoom in to better
32:21
inspect. Then after it literally
32:24
crops in, zooms in, gets
32:26
a clear, uh, kind of
32:28
view of the photo. Then
32:30
it says, okay. I see
32:32
the ships now. The
32:34
largest one appears to be the
32:36
Red Hulled Tanker on the right, with
32:39
another smaller ship in the center.
32:42
The user wants to know the name of the
32:44
largest ship and where it will dock next. At
32:46
this distance, the name on the
32:48
hull may be too blurry to read.
32:50
Maybe zooming in further will help
32:52
get a clearer view. So it essentially
32:54
enhances the image, continues to zoom,
32:56
and then it decides at a certain
32:58
point, okay, I've
33:00
now understood the location, right?
33:03
So then it goes on and
33:05
it uses things like location data.
33:07
It looks up using the internet
33:09
to correctly identify what that ship
33:11
actually is.
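To give a feel for the shape of that crop-zoom-then-search behavior, here is a heavily simplified sketch of an agent loop over a couple of tools. The tool functions are stubs and the decision logic is hard-coded; the real model chooses tools and arguments on its own, so this illustrates the pattern, not OpenAI's internals.

```python
# Heavily simplified sketch of the agentic pattern in the ship example:
# inspect an image, zoom in if details are unreadable, then search the web.
# All tools are stubs; the real model decides when and how to call tools.

def inspect_image(path: str, zoom: int) -> dict:
    # Stub: pretend the ship's name only becomes readable at higher zoom.
    readable = zoom >= 3
    return {"ship_name": "EXAMPLE TANKER" if readable else None, "zoom": zoom}

def web_search(query: str) -> str:
    # Stub: stand-in for a real web search tool call.
    return f"(search results for: {query})"

def find_biggest_ship(photo_path: str) -> str:
    for zoom in range(1, 5):              # iterate: crop and zoom until readable
        observation = inspect_image(photo_path, zoom)
        if observation["ship_name"]:
            # Chain into a second tool using what the first tool found.
            return web_search(f"{observation['ship_name']} next port of call")
    return "Could not read the ship's name."

print(find_biggest_ship("harbor.jpg"))
```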
33:13
So, also, there is BrowseComp,
33:16
an agentic browsing benchmark from
33:18
open AI. And I think
33:20
this is worth pointing out because.
33:23
If you've ever used the 4o
33:25
model, and if you've uploaded an
33:27
image and then had it go
33:29
browse, such as the case in
33:31
this example, 4o is
33:33
not good, right? So
33:35
it only has a
33:37
1 .9 % accuracy rate. Whereas
33:39
now, right, when you
33:41
look at o3 with Python. Okay.
33:44
So again, that means it can,
33:46
uh, kind of create its own
33:48
code and render code to help
33:50
solve problems on the fly. So
33:52
when you have this new reasoning
33:54
model that has a better visual
33:57
understanding, it can run code to
33:59
help it solve problems and it
34:01
can browse the internet. That 1 .9
34:03
% accuracy from 4o with
34:05
browsing goes to nearly 50 % with
34:07
o3. An extremely
34:10
impressive job. All
34:12
right. Um, and also
34:14
FYI, I threw this in here, uh,
34:16
should have been a couple slides
34:18
back, but we did cover, uh, when
34:21
we talk about use cases, since we're going to
34:23
be jumping into use cases tomorrow, uh,
34:25
there's actually some use cases. I think a
34:27
lot of people are sleeping on that we
34:29
went over in the new 4o image gen,
34:31
but also the new model can do
34:33
image gen in o3. So
34:36
here's the overall features and
34:38
takeaway as we wrap up today's
34:40
show. So: o3
34:42
is a powerhouse of reasoning. It
34:44
excels in coding, math,
34:46
science, and visual tasks. So
34:48
it provides deep insights and
34:50
complex solutions. And it does this
34:53
by tackling intricate coding science
34:55
data and creative tasks. It can
34:57
quickly analyze complex data sets.
34:59
Yeah, you can upload files and
35:01
it can create a new
35:03
intelligence with those files that you
35:06
upload for human level insights.
35:08
It thrives where deep understanding and
35:10
factual accuracy are essential, and
35:12
it's ideal for applications demanding high
35:14
-level expertise, right? So if you've
35:16
used OpenAI's deep research, it
35:19
actually, that was the
35:21
only, I guess, tool or mode
35:23
previously that used O3, the full
35:25
version, right? Whereas, you know, for
35:27
the last couple of months when
35:29
we've had deep research, It
35:31
was not using o3 mini,
35:33
right? And there's a huge jump
35:36
between o3 mini and this o3 full
35:38
or o3 high, whatever you want
35:40
to call it, right? And it does
35:42
a fantastic job of this agentic
35:44
browsing on the web and iterating, uh,
35:46
and kind of, uh, changing course
35:48
midway through, uh, again, depending on, uh,
35:51
what you, uh, start with, and
35:53
it's ideal for applications demanding high level
35:55
of expertise. o4-mini, to
35:57
be honest, uh, unless
35:59
you're using o4-mini
36:01
because you don't want to run
36:03
out of prompts, right? Of
36:06
those like 50 messages a week.
36:08
Otherwise, there's no reason to use it
36:10
on the front end. There's not,
36:12
but I think 04 Mini will be
36:14
probably in the long run more
36:16
for developers because right now it's faster
36:18
and it's more efficient. So the
36:20
big thing with 04 Mini here, it's
36:23
speed, scalability and efficiency. It's a
36:25
smaller model, but it balances reasoning with
36:27
computational efficiency and it excels where
36:29
speed and costs are key and it's
36:31
ideal for high volume use. It's
36:33
quicker, yet it is still insightful in
36:35
interpreting data and it streamlines workflows
36:37
with adaptable processing. So
36:39
yeah, if you're,
36:42
uh, on a paid plan, uh, you
36:44
know, using ChatGPT on
36:46
the front end, you should probably never
36:48
prefer to use o4-mini. It
36:50
should really only be if you've kind
36:52
of hit your quota, uh, for
36:54
the week with 03. But, you know,
36:56
if you're a casual user and
36:58
you're like, okay, 50 messages a week,
37:01
I can get by with that
37:03
for 03. You shouldn't be
37:05
using O4 Mini, but if you're
37:07
a power user, yeah, you might
37:09
have to use O4 Mini for
37:11
some of those tasks and then
37:13
kind of pocket O3 for the
37:15
more complex things or things that
37:17
require, you know, kind of juggling
37:19
these tools. And that's ultimately where
37:22
o3 excels, you know: its
37:24
agentic use of multiple tools
37:26
and researching and changing course. It's
37:28
extremely impressive. So tool chaining,
37:30
that's something you're probably going to start
37:32
hearing a lot. And that's why it's
37:34
important. And that's why I think what
37:36
makes it the most powerful model in
37:38
the world is the ability to use
37:40
multiple of these tools at the same
37:42
time for you to be able to
37:45
upload files for you to start with
37:47
computer vision, right? Or start by, you
37:49
know, uploading a photo and have it
37:51
to be able to reason over that
37:53
photo. The ability to essentially
37:55
do deep research, right? So it's
37:57
not just blanket doing one search
37:59
and pulling in all of that
38:02
aggregate data and thinking over it
38:04
at once. It's going literally
38:06
step by step and it's researching. And
38:08
if you find something in its research,
38:10
I've seen this, it will change course.
38:12
I've had it a couple of times.
38:14
Start by using computer vision. Then it
38:16
goes and starts on the web. Then
38:18
it goes and starts using a Python
38:20
to create something. And then in the
38:22
middle of that, it's like, Oh, wait,
38:25
I need to go back. Uh, to
38:27
the web. And then it's like, Oh,
38:29
wait, I need to go zoom in
38:31
on that photo. Right. So that's where
38:33
this really excels in this, in kind
38:35
of a special sauce and why, like,
38:37
when I first started using this, my
38:39
jaw kind of dropped, which is hard
38:41
for me to do as someone that
38:43
spends so much time on AI tools
38:45
is its agentic tool chaining and
38:48
putting these different capabilities together and deciding
38:50
on its own when it should use
38:52
what tool and then going back and
38:54
reiterating on its own so it can
38:56
think with images it can crop zoom
38:58
and rotate visuals during analysis. 200
39:01
K token context is
39:03
great for deep layered workflows.
39:05
And then to seamlessly
39:07
chain together tools, the web,
39:09
Python and image gen
39:11
for complex queries, like forecasting
39:14
things, right? And then
39:16
to have this autonomous decision making. So,
39:19
um, complex
39:21
queries, this is your
39:23
model, right? Because of
39:25
that autonomous ability to chain together
39:27
these different tools. So. Google
39:30
has a shorter, smaller version of
39:32
this, but for the most
39:34
part, when I'm using Gemini 2
39:36
.5, I don't see Gemini 2
39:38
.5's ability to go back and
39:40
forth and reiterate on its
39:43
tool use. So yes, it
39:45
can create things in its canvas mode
39:47
in Gemini 2 .5 pro. Uh, it
39:49
can query on the web, but for
39:51
the most part, it is more of
39:53
this unilateral approach where, uh, oh three
39:55
does these in parallel. And
39:58
it iterates on its own tool use,
40:00
right? Which is, it is, right? I
40:02
don't know. People remember when I used
40:04
to talk about plugin packs and how
40:06
they were so powerful back when chat,
40:08
she had plugins and I'm like, y 'all
40:10
are missing the big thing here, right?
40:12
And it hasn't been until now that
40:15
I've had that same feeling because essentially,
40:17
right? Uh, you look at these different
40:19
agentic tools. kind
40:21
of like plugins or tasks, right? So
40:23
part of it will analyze the image
40:25
and then it'll use that information to
40:27
go find, you know, updating information on
40:29
the web. Then it will pull that
40:32
and maybe start using Python. Then it'll
40:34
look at the image again. So I
40:36
almost think of it as kind of
40:38
like multiple specialists working together, but they'll
40:40
work one at a time and then
40:42
the researcher will come and find things
40:44
and then bring, you know, bring that
40:46
back to the data analyst, which is,
40:48
you know, Python. Uh, right. And it'll
40:51
keep working iteratively and then even use
40:53
the canvas mode. So it's almost like,
40:55
you know, you have a UI UX
40:57
designer, right? So it does all of
40:59
these things iteratively where I don't think
41:01
we've really had that with any models,
41:03
right? So even, uh, with Gemini 2 .5
41:05
pro again, this model hasn't been out
41:08
for very long. It does seem and
41:10
feel and under the hood look like
41:12
a more unilateral approach, uh, where I
41:14
think where o3 shines is that it
41:16
can adapt its own strategy on the
41:18
fly. It reacts to information, it
41:21
refines its tool use, and
41:23
it can tackle those tasks
41:25
requiring up -to -date data, expanded
41:27
reasoning, and diverse outputs. All
41:30
right. That's a wrap,
41:32
y'all. I'm gonna scroll through
41:34
and if I see any questions Joe
41:36
just says thanks for this report very
41:38
helpful. I wonder how OpenAI has
41:40
resolved Inter model communications for chaining. Yeah,
41:42
we'll see right so We have heard
41:44
and this has been pushed out right
41:47
that in the future you're not going
41:49
to be able to decide which model
41:51
to use right and GPT -5 will
41:53
actually be an architecture that houses some
41:55
of these modes or some of these
41:57
models under the hood and you may
41:59
not get to choose I don't want
42:01
that to happen. I don't want GPT -5,
42:04
right? I want to be able to
42:06
choose my own models, right? So it
42:08
should be interesting to see how that
42:10
happens. All right, we
42:12
have a LinkedIn comment here. Someone said,
42:14
in your newsletter, you mentioned you
42:16
have been struggling to push past O3's
42:18
limits. And we'd love to hear
42:20
more about that. What limits have you
42:22
been pushing? Yeah, great question.
42:25
And yeah, sorry, for whatever reason, LinkedIn
42:27
settings, I don't see your name. It's
42:30
been very easy for me to push
42:33
models to the limit and one of
42:35
the reasons is you give them complex
42:37
tasks that would normally unfold over the
42:39
course of like an hour long conversation,
42:41
right? You know
42:43
saying hey analyze this
42:45
photo, then go create
42:47
a chart where you forecast something
42:50
based on information that you
42:52
pull from this photo. So as
42:54
an example, here's a photo
42:56
with a bunch of AI tools. And
42:58
this is probably an example I'll do tomorrow.
43:00
Go look up pricing for all these
43:02
tools. Go look up. uh, you know, what's
43:04
included on a free and paid tier,
43:07
then, you know, using, uh, you
43:09
know, kind of your coding abilities, create
43:11
a chart, uh, but then go out and
43:13
also create, I don't know, a website
43:15
or an interactive graph on this. So, you
43:17
know, it's, it's been difficult for me
43:19
to kind of break. some of these models
43:21
because they don't have essentially complex tool
43:23
use and o3 does and it seems like
43:25
at least in my very initial testing
43:27
which hasn't been a lot, right. I've probably
43:30
only been able to give o3 i
43:32
don't know maybe uh 10 or so hours
43:34
so far. I've been very busy. I had,
43:36
uh, a keynote and a workshop, and I
43:38
moderated a panel at 1871 and, uh,
43:40
you know, planning all these episodes. So I
43:42
haven't had my normal amount of time,
43:44
you know, we had the Easter weekend. So
43:46
I was, uh, you know, trying to
43:48
spend as much time with family as possible.
43:51
Uh, so I haven't had as much
43:53
time to break it, but I haven't been
43:55
able to break o3 yet because it's
43:57
extremely, uh, extremely capable. So
43:59
McDonald asking, do
44:01
you recommend using this for building
44:04
games? It depends, right? I still would
44:06
probably start that in Gemini 2 .5
44:08
Pro again, just because O3 is
44:10
the most powerful model in the world
44:12
does not mean it's necessarily the
44:14
best. I think the use cases are
44:16
gonna be when you need to
44:19
string together all of these agentic use
44:21
cases. At least for me, if
44:23
I'm looking for... off, you know, building
44:25
games as an example. I'm not
44:27
a coder, but I would probably still
44:29
do that in Gemini 2 .5 pro.
44:31
It's going to be faster and
44:33
its coding capabilities are outstanding. Alright,
44:36
let me just real quick before we
44:39
wrap this up See if there's any more
44:41
questions. I always try to get to
44:43
questions at the end big bogey face from
44:45
YouTube saying why use a sledgehammer when
44:47
a rock and hammer will do Yeah, that's
44:49
a great point Renee is asking what
44:51
about Manus? So, Manus is a
44:53
little different. You have to choose a
44:55
model uh, for Manus, uh, Manus is not
44:57
publicly available yet. Right. You have to
44:59
get on the waitlist, get access and it's,
45:01
it's, it's different. Right. That's why people,
45:03
you know, sometimes they're like, Oh, you know,
45:05
what about perplexity? Well, perplexity at its
45:08
core is not a large language model. Neither
45:10
is Manus. Right. Manus, you have to
45:12
use a model and then Manus is essentially
45:14
a collection of tools. Uh, and right
45:16
now it runs on Claude Sonnet. So,
45:18
uh, it is completely different. Uh,
45:20
that is a, a
45:22
true, uh, kind of operating
45:24
agent. whereas this is
45:26
more interfacing inside of a
45:28
chat like you would
45:30
a traditional large language model.
45:33
All right, we have some proposed
45:35
use cases for tomorrow. All
45:37
right, we have one more
45:39
question here from Kiran saying, how
45:41
might the advantage, the advancements
45:44
in the O3 and O4 mini
45:46
models influence the development of
45:48
future AI systems, such as the
45:50
anticipated GPT -5? That's a great
45:52
question, Kiran. I don't have
45:54
the answer, right? I'm lucky enough.
45:56
I have contacts over at
45:58
OpenAI that I chat with. I
46:01
don't know the answer to this. As I
46:03
get the answer, I will get it to you.
46:05
But again, OpenAI has delayed
46:07
GPT -5. And they said
46:09
that they've been struggling to
46:11
essentially put all of these
46:13
capabilities under this kind of
46:16
umbrella and turning it into
46:18
a system. So like I
46:20
said, personally, Personally,
46:23
I'm not looking forward to GPT
46:25
-5. I love, right? Even
46:27
though a lot of people look at
46:29
it as this chaotic mess, I love
46:31
going into my ChatGPT account and
46:33
seeing, you know, seven to 10 different
46:35
models to choose from, right? Cause I'm
46:37
a power user. I know what I'm
46:40
doing. And generally I have a better
46:42
idea. than a GPT -5 system probably would
46:44
of knowing which model is best, because
46:46
I've used them all for hundreds of
46:48
hours for my own use cases, right?
46:51
Maria is saying, I'm still waiting
46:53
for the OMG model. For
46:55
me, you know,
46:57
I think the Gemini
47:00
2.5 was an 'oh my'
47:02
model. And O3 is a...
47:04
Oh my gosh, model,
47:06
right? So I went from 'oh my'
47:08
with Gemini 2 .5 Pro, which most most
47:10
of the time models, I'm just like, okay,
47:13
you know, cool. This is nice. Gemini
47:15
2 .5 was oh my and oh three
47:17
was oh my gosh. All right. So we're
47:19
going to continue this tomorrow. So And
47:21
make sure you tune in for part two.
47:23
We're going to be going over different
47:25
use cases, but also let me know what
47:27
do you want to see. So if
47:29
you're listening on the podcast, thanks for tuning
47:31
in. Uh, make sure to go to
47:33
your everyday AI.com sign up for the free,
47:35
free daily newsletter. We're going to be
47:37
recapping the most important takeaways from today's episode,
47:39
but also you can just reply to
47:41
today's email that's going to come out in
47:44
a couple of hours and let me
47:46
know what is the use case you want
47:48
to see tomorrow, right? I really want
47:50
to tackle, uh, things, uh, that are on
47:52
your mind. I call this
47:54
your everyday AI because it's for you. So
47:56
I want to hear from you. Uh,
47:58
what do you want to see this new
48:00
03 model tackle? Yeah, maybe you have
48:02
limited messages and, and you don't have, uh,
48:04
kind of the, uh, message budget, so
48:06
to speak, uh, to tackle this. I've got
48:08
unlimited. Put me to work. Let me
48:10
know what you want me to see or
48:13
let me let me know what you
48:15
want to see in our part two. Uh,
48:17
if this was helpful, please, this would
48:19
help Click that little repost button y 'all,
48:21
uh, share this with your network. Uh, I
48:23
know you're trying to be the smartest
48:25
person in AI at your company in your
48:27
department. That's what we try to help you
48:29
with at your everyday AI, but this thing
48:31
only works when you share it with
48:33
others. So if you're listening on social, please
48:35
share it with others. If you're listening
48:37
on the podcast, please follow the show, click
48:39
that little button. Uh, if you could
48:42
leave us a rating, I'd really appreciate it.
48:44
So thank you for tuning in. We'll
48:46
see you back tomorrow and every day for more
48:48
everyday AI. Thanks y 'all. And
48:52
that's a wrap for today's edition
48:54
of everyday AI. Thanks for joining us.
48:56
If you enjoyed this episode, please
48:58
subscribe and leave us a rating. It
49:01
helps keep us going for a
49:03
little more AI magic, visit your everydayai.com
49:05
and sign up to our daily
49:07
newsletter so you don't get left behind.
49:09
Go break some barriers and we'll
49:11
see you next time.