Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
This episode is brought to you by State
0:02
Farm. You might say all kinds of
0:04
stuff when things go wrong, but these are the
0:06
words you really need to remember. Like
0:08
a good neighbor, State Farm is
0:10
there. They've got options to fit
0:12
your unique insurance needs, meaning you can talk
0:14
to your agent to choose the coverage you
0:16
need, have coverage options to protect the things
0:19
you value most, file a claim right on
0:21
the State Farm mobile app, and even reach
0:23
a real person when you need to talk
0:25
to someone. Like a good neighbor,
0:27
State Farm is there. This
0:30
episode is brought to you by Indeed.
0:33
When your computer breaks, you don't wait
0:35
for it to magically start working
0:37
again, you fix the problem. So
0:39
why wait to hire the people
0:41
your company desperately needs? Use Indeed's
0:43
sponsored jobs to hire top talent
0:46
fast. And even better, you only
0:48
pay for results. There's no
0:50
need to wait. Speed up
0:52
your hiring with a $75 sponsored
0:54
job credit at Indeed.com slash
0:56
podcast. Terms and conditions apply. This
1:00
is AI Inside Episode 65,
1:02
recorded Wednesday, April 23rd,
1:04
2025, the era of
1:07
experience. This
1:10
episode of AI Inside is made possible
1:12
by our wonderful patrons at patreon.com slash
1:14
AI Inside show. If you like what
1:16
you hear, head on over and support
1:18
us directly and thank you for making
1:20
independent podcasting possible. Hello
1:27
everybody and welcome to another episode of AI
1:29
inside the podcast where we take a
1:32
look at the AI that is layered throughout
1:34
so much of the world of technology
1:36
and beyond. I'm one of your hosts, Jason
1:38
Howell, joined as always by Jeff Jarvis. I
1:41
can hear you now. How are you
1:43
doing, Jeff? Hello there. Sorry about that. I
1:45
have, Jason is like in five places in
1:47
my life right now. I'm on YouTube. I'm
1:49
watching the stream on Twitter. There's Jason.
1:51
Jason's everywhere. You're checking all
1:53
the things. I know when I cut to
1:56
you, I heard myself actually in the background. I'm
1:58
like, oh, is that me now or
2:00
me then? Me in the past? It's
2:02
the matrix. Yeah, it is
2:04
the matrix and you know, I'm getting
2:06
comfortable living in the matrix. This is
2:08
just, this is the year 2025. 2025
2:10
is the year where we realized we
2:12
are living in a simulation. Before we
2:14
get started, a huge thank you to those
2:17
of you who support us on our
2:19
Patreon. That's patron of the week Dan
2:21
Merchant is one of the amazing
2:23
patrons that support us each and every
2:25
week. Just go to patreon.com slash
2:27
AI Inside show if you want
2:29
to take part in that. We really
2:31
do appreciate it. And again, just
2:33
a quick call out, leave us a
2:35
review or a star rating on
2:38
your podcatcher of choice. But the reviews
2:40
are really helpful. We have gotten
2:42
at least one that I've seen in the
2:44
last week that is more current. And
2:46
that's kind of what I'm hoping for is
2:48
to kind of freshen up the reviews and
2:50
get some newer reviews. So even if you
2:52
have an older one, renew it, you know,
2:54
refresh it, whatever. We really appreciate it. But
2:56
let's jump in because this is a news
2:58
episode. We got a lot of news to
3:00
talk about. And I think
3:02
maybe we start with
3:04
Google, and I say Google
3:06
ahead of OpenAI, even though
3:08
OpenAI is at the start of
3:10
this news story as well.
3:12
But Google's antitrust trial is,
3:15
would you call it the punishments phase?
3:17
I don't know. It's kind of like,
3:20
it's kind of like, how can we
3:22
punish you, Google, phase? Yeah. Yes,
3:24
exactly. The company was found
3:26
guilty of its, well, found
3:28
guilty of unlawful practices in online
3:30
search and advertising in the
3:32
US. And as a result,
3:34
the Department of Justice has
3:37
recommended, I kind of feel like
3:39
it's not going to happen, but has recommended that
3:41
Google, you know, if we
3:43
get our way, Google is going to
3:45
have to unload Chrome. They're going to
3:47
have to find a new owner for
3:49
Chrome. And it sounds like OpenAI has
3:51
an executive that testified at the trial,
3:53
hey, we'd be interested. We'd be down.
3:55
That's Nick Turley, head of product who
3:58
testified that the company would be interested
4:00
in owning Chrome. Just saying, if you're selling,
4:02
we'd certainly be interested in owning it.
4:04
What do you think about that, Jeff? It's
4:07
a stunt. It's just like Perplexity saying they're
4:09
going to buy TikTok. It's now
4:11
the way to punish everybody is to make them
4:13
sell something, and then everybody jumps up and says,
4:15
oh, buy it. And it's a kind of ridiculous
4:17
story cycle that we're in now. And
4:19
OpenAI, A,
4:21
I trust Google with Chrome
4:23
a lot more than I would trust
4:26
OpenAI, period. B,
4:28
I think it's a stunt. I think it's just for
4:30
the, what, but it worked. But we'll see all over.
4:32
We're doing it right now. But we're doing that because
4:34
it was in the news all over. And
4:37
so that's where it is. And
4:40
C, I wonder what the real
4:42
value is to OpenAI. Sure,
4:44
it could insert itself in that browser,
4:46
but hello, antitrust. It's the same problem that
4:49
it tries to fix. Then it's even
4:51
worse because the company will use it for
4:53
its own purposes and not allow others
4:55
in. Google has always allowed others in. But
4:58
that's a 20 -year problem, Jeff. That's 20
5:01
years from now when they finally say,
5:03
oh, actually, that was a bad idea 20
5:05
years ago. Yeah, well, just like Microsoft
5:07
and the browser in the past. But browsers
5:09
have been such a focal point, because
5:11
I think they're our main
5:13
entry point. I remember many, many
5:15
years ago, at the beginning of
5:17
the web, my son, we
5:20
ran focus groups when I
5:22
worked for Advance Publications
5:24
in Cleveland. The
5:27
people in the focus group said,
5:29
you know, there's this amazing thing on
5:31
this online. It just has
5:33
everything. It has the weather.
5:35
It has sports. It has
5:37
news. It has fun. What's that?
5:40
It's called Netscape. And,
5:43
you know, we've seen for a long time,
5:45
people don't understand the brands that underlie the
5:47
web and everything else. I think that's probably
5:49
less the case now. But
5:51
I think there's this kind of
5:53
naive view among both regulators and
5:55
media that the browser is everything. Whether
5:59
it was Microsoft or whether it's now Chrome. So
6:03
it seems like a kind of a
6:05
silly moment in all of this. And
6:07
it's serious stuff going on with Google. Absolutely.
6:09
I see that Chris in the questions asked whether
6:11
Google could make another browser. I don't know
6:13
because there is no decision yet whether this is
6:16
actually going to be the path that was
6:18
a recommendation. And I don't
6:20
know what limitations there might be. But
6:22
once again, I would trust Google more
6:24
than I would trust OpenAI. And
6:27
what does it do to all the rest
6:29
of the services? The browser
6:31
is key to everything we use,
6:33
to email and docs and
6:36
drive. And
6:38
maybe not Maps, Translate,
6:41
all these services we use are out of this
6:43
hub of the browser. And if you try to
6:45
split that off, it's like saying the
6:47
phone company can own the line, but
6:49
somebody else has to own the phone. For
6:52
those of you who remember the old days where those
6:54
were two different parts, I'm sorry, I just dated myself. And
6:57
there was a wire that went into the
6:59
wall. Yeah, and curly cord on it, too.
7:01
For you kids, I'll show you in my
7:03
Jeff's Museum later. Well,
7:06
I think what you were just
7:08
talking about kind of illustrates both
7:10
sides of it. You know, you're
7:12
kind of saying... the browser, like,
7:14
it's long been seen as this
7:16
very, very important thing. But I
7:18
don't know that that's really, you
7:20
know, maybe the case anymore.
7:22
Maybe I'm getting, you
7:24
know, your words mixed up a
7:26
little bit. But also it
7:28
is really important. And we do
7:31
channel and funnel so much through
7:33
it. And so I can see
7:35
why a company like Open AI
7:37
might love to have Chrome as
7:39
their kind of anchor for, you
7:41
know, especially when we're talking about
7:43
the agentic AI ambitions, you know,
7:45
to have a browser that you
7:47
can just then completely connect your
7:50
AI service and all of those
7:52
agentic qualities directly into, you know,
7:54
Perplexity has its comet browser that
7:56
it's doing this. I think Perplexity
7:58
would be another, you know, kind
8:00
of interesting party to
8:02
want to own it. I don't think
8:04
that they've explicitly said, hey, we'd be interested,
8:06
but I wouldn't be surprised if they
8:09
do. And nor do
8:11
I think any of it matters, because I think at
8:13
the end of the day, Google's not going to have
8:15
to sell Chrome. That's my hunch. That
8:17
would be like, you know,
8:19
to your point, the the kind
8:21
of tangled web of everything
8:24
that is connected, that would
8:26
interplay in that move, just
8:28
seems like a lot and I know
8:30
that the DOJ in a case like
8:32
this, at least my recent understanding
8:34
of this is that they shoot for
8:36
the moon and often they end up
8:38
somewhere in between where it was and,
8:41
you know, where they're shooting for. And
8:43
I don't think that that place in
8:45
between necessarily means Google has to get
8:47
rid of Chrome. But yeah,
8:49
an entirely different DOJ than it was
8:51
when this case was brought. So yeah,
8:53
that's true. They're continuing on the
8:55
theme that they had of Google bad,
8:57
but we'll see. But, you know, you
8:59
raise two other points that I
9:01
think are really interesting, Jason. And
9:04
one argues with the
9:06
other. The one is
9:09
that there's finally
9:11
competition in browsers, right? Perplexity is going
9:13
to create a browser. They
9:15
have reason to do so. And if
9:17
OpenAI is hungry enough for browsers, it
9:19
could make one. It wouldn't cost them
9:21
much at all. It's
9:23
trivial. So on the one hand, just
9:26
as the argument is that this is
9:28
anti -competitive and we have to pull
9:30
it away from Google, there's competition. The
9:33
other contrary argument is that we
9:35
talk a lot about whether a
9:37
generative AI replaces search. What
9:40
if it replaces the browser? What
9:42
if agentic AI replaces the browser
9:44
in essence? It makes the browser
9:46
far less important because your pathway
9:48
to applications and to information and
9:50
to functions is going to be
9:52
otherwise. It's going to be through
9:54
command structures, new command structures, voice,
9:56
and so on and so forth.
9:59
Whether that happens or not, we can
10:01
predict, you know, till the cows come
10:03
home. Yeah. But the idea that the
10:05
browser, it's exactly the same as the
10:07
Microsoft fight. The browser was the key
10:09
to everything. And then it wasn't for
10:11
Microsoft. They lost the browser war. There
10:13
was competition, and there's competition still. So
10:16
yeah, I want to agree with you that
10:18
I don't think they'll be forced to do this.
10:21
These days, I can't predict anything.
10:23
Yeah. But the other thing that
10:25
obviously bothers me is I'm a
10:27
Chromebook guy. Right.
10:30
And yeah, I mean, to
10:32
have to sell Chrome. Apart
10:35
from is that the browser alone? Is
10:37
that the OS? What
10:39
does that really mean? Yeah,
10:41
good question. Yeah,
10:43
go ahead. Sorry. No,
10:46
no, no, no, I want to hear what you had to say.
10:48
Just one other thing that just occurred to
10:50
me the other day. With Google, you originally used
10:52
to go to Google dot com and there
10:54
was the blank on the page and you type
10:56
that in, right? And when Google went to
10:58
the, what's it called Jason, the one bar or
11:00
the one, whatever. Yeah. What is it? Yes, the
11:02
address bar became everything, right? Mm-hmm. It was
11:04
actually confusing. It confused me for about
11:06
a week. Well, is that the address bar?
11:08
Is that where I go to put things in?
11:11
And when I would go, of course,
11:13
to the Google search page, it would go ahead
11:15
and put it up into there to train
11:17
me. Yeah. And say, you're doing everything. Omnibox.
11:19
Thank you. Omnibox. Well, so
11:21
the Omnibox, the browser is not
11:23
just a browser. The Omnibox is
11:25
the path to all kinds
11:27
of functionality. So
11:29
anyway, the browser is a
11:31
fungible beast now. When's the last
11:34
time I went to a Google search
11:36
page and clicked on the search
11:38
area in the middle of the page
11:40
and put my search in there?
11:42
Like, it happens very rarely and randomly.
11:44
And I couldn't even tell you
11:47
what is the circumstance that takes me
11:49
there. But everything that I do
11:51
is in that Omnibox. I mean,
11:53
and what you're saying also really
11:55
reminds me of the conversation that I
11:57
had at Mobile World Congress with
11:59
Google's Android head, Sameer Samat. When
12:02
we were talking about
12:04
a post -app world where
12:06
agentic AI becomes so prevalent,
12:09
is there a need for apps? Is
12:11
there a need for applications when
12:13
agentic AI can just kind of go
12:15
to the places it needs to
12:17
go to do the things? And
12:19
I think Sameer's point was
12:21
also, you know, appropriate and spot on,
12:23
which is that even even in
12:25
that world, there is still a need
12:27
for companies, for brands, for destinations
12:29
to have some sort of a kind
12:31
of a place to go or
12:34
a, you know, maybe they've got a
12:36
brand that they want to convey.
12:38
And that's, that's how you do it.
12:40
Like the agentic AI can do
12:42
those things. But it might not necessarily
12:44
mean that we don't have those
12:46
other things as well, because they also
12:49
serve other purposes too. So
12:51
I don't know. Um, I
12:53
do see if, you know, obviously
12:56
this was a, this was a
12:58
publicity stunt, uh, on OpenAI's,
13:00
you know, side to
13:02
gain more of the oxygen from
13:04
the PR room, which it's
13:06
very good at doing. But
13:08
no matter what, OpenAI is
13:10
going to do the browser thing, you
13:12
know, I guarantee you they're working
13:15
on it behind the scenes already. It's
13:17
going to be trivial. In fact,
13:19
tell the
13:21
chat to make the browser software. I
13:24
mean, it's fairly trivial, I
13:26
think. Yeah, and
13:28
I guess what's coming to me
13:30
right now also is that
13:32
there's some overlap here between how
13:34
necessary is it for
13:36
AI to have its own browser
13:38
in the same way that
13:41
how necessary is it to AI
13:43
to have its own piece of
13:45
hardware like a Rabbit R1
13:47
or whatever? You know,
13:49
it's everybody's looking for these different ways
13:51
to make it. I don't
13:53
know, make it more of an
13:55
immediate utility and it seems to be
13:57
doing all right in the form
14:00
that it is right now. But, you
14:02
know, we should pay tribute to
14:04
the browser was a paradigmatic shift.
14:07
I was working at Delphi for
14:09
one horrible month, which was Delphi
14:11
Internet before I got the hell
14:13
out. And they were going to
14:15
have a GUI because everybody had the
14:18
GUI, right? There was AOL and Prodigy
14:20
and so on. And you had to
14:22
have your graphical user interface. And
14:24
then along, I remember the day when somebody came
14:26
in and said, you got to see this thing.
14:29
And it was the browser, the
14:31
first crude blue browser. Uh,
14:33
but it, it immediately said that changes everything.
14:35
That's a pathway to everything. And that's
14:38
what a browser is. It's not a program
14:40
in and of itself. It's just a
14:42
way to get to stuff. Interesting
14:46
stuff there. Um,
14:48
and then we were talking a little
14:51
bit, you know, I mentioned perplexity, perplexity
14:53
comes up a lot these days on
14:55
this show. Uh, I think this is
14:57
interesting. Perplexity, uh, is working on some
14:59
big deals right now. One of them
15:01
we might actually hear more
15:03
about tomorrow with Motorola. This
15:05
was a deal that I
15:07
guess Motorola has an event
15:09
tomorrow. It's expected that the
15:11
event is going to be
15:13
about their new Razr phone
15:15
and according to Bloomberg, probably
15:17
going to get some information
15:20
around a deal that Motorola
15:22
has struck with Perplexity to
15:24
have its Perplexity agent pre-loaded
15:26
on Motorola devices, Gemini,
15:28
I'm guessing would still be present
15:30
on the device. I don't think
15:32
that this is necessarily saying that
15:34
Motorola would not have Gemini installed,
15:36
you know, and Google's Gemini is
15:38
out and Perplexity is in, but
15:41
I think this brings the option
15:43
of another AI Assistant or at
15:45
least the app onto the device.
15:48
And then there's Samsung,
15:50
which apparently is in early talks
15:52
with Perplexity as well. Right now,
15:54
I think according to the case
15:56
that we were just talking about,
15:58
it was revealed that Samsung has
16:00
a two -year kind of licensing deal
16:02
with Google. Samsung's been working really
16:04
closely with Google on a lot
16:07
of things, and one of them
16:09
is bringing Gemini into Samsung phones.
16:11
But it turns out Perplexity is
16:13
talking and potentially making deals with
16:15
Samsung to bring the Perplexity agent
16:17
onto Samsung devices potentially in place
16:19
of Gemini. And as all of
16:21
these court cases happen and start
16:23
to kind of strip apart the
16:25
status quo of how Google does
16:27
its business and strikes as deals,
16:30
this could be something that we
16:32
see more of in the next
16:34
couple of years. So
16:36
back to the prior conversation, I
16:38
didn't expect this to be tied in, but it is.
16:41
Let me ask you a question, because
16:43
you're a pro phone user. Right. You
16:45
study phones and how they operate,
16:47
and your use of them is critical
16:49
to your research, right? Um,
16:53
when you think of doing something on
16:55
your phone, what proportion of the time
16:57
do you, I think these are three
16:59
choices. Do you go directly to
17:01
an app? Do you go to
17:03
the browser? Do you go to the assistant?
17:07
Oh, that's a really great question.
17:09
Uh, trying to break it
17:11
down. I mean, I
17:13
probably, hoo, that's
17:15
a fantastic question. I don't.
17:17
OK, I'll start with assistant.
17:19
I don't use the kind
17:21
of baked in shortcut for
17:23
assistant very often. Right. I
17:26
mean, and I
17:28
that's really slowed down. I think that
17:30
was different when Google Assistant was more leveraged
17:33
and kind of a little bit more dependable
17:35
and new because it was the new thing and
17:37
I wanted to get in the habit. And I
17:39
certainly had that habit for a while. I
17:41
don't use it as much and I
17:43
definitely don't use it as much with
17:45
Gemini. It's not something that I go
17:47
to regularly, every once in a while
17:49
I do. I'm almost more inclined to
17:51
launch the app, honestly, to do that,
17:53
either with Gemini or with Perplexity, which
17:55
I do launch Perplexity and it does
17:57
have the voice assistant capability inside the
17:59
app. And so I will sometimes launch
18:01
that. Am I going to like I
18:03
could probably open up my Pixel and,
18:05
you know, in the settings and assign
18:07
the Perplexity voice agent as my main
18:10
agent, but I choose not to. I
18:12
couldn't tell you why. I think it's
18:14
partly because Gemini is kind
18:16
of tied on a little bit deeper
18:18
level to the Android operating system. Like
18:20
Perplexity's voice agent is great for search.
18:22
I could be called into the antitrust trial
18:24
just for that comment. But
18:27
it's true, you know. This
18:29
has been Google's big strategy for better
18:31
or for worse. And I think it's
18:33
starting to bite them in the butt.
18:35
It is the intertangled web. At the
18:38
same time, I'm a consumer and I
18:40
want those conveniences. So
18:42
that's part of the reason why I
18:44
don't remove Gemini from that placement
18:46
and put Perplexity in its place. And
18:48
then how often do I go
18:50
to the browser? Almost
18:55
always, if I go to the browser,
18:57
it's because I used the Google
18:59
search on this, you know, that's on the
19:01
home screen to like ask a question of
19:03
something or to, you know, if I, if
19:05
I really need to go directly to a
19:07
website, I guess I just put it in
19:10
there. Um, I don't know. It's hard to,
19:12
it's hard to give concrete answers on that.
19:14
I don't know how to answer that other
19:16
than when I need to. So,
19:18
so I'm an old fart. So I mean, I
19:20
go to the browser. Okay.
19:22
There are certain apps that I use, obviously, you
19:24
know, the weather app or whatever. But I tend
19:26
to, my reflex is what I'm saying, is
19:29
I want to get somewhere, I want to look
19:31
up something, I want to do that. I
19:33
go to the browser. If
19:35
I'm using the app, it's
19:37
really more voice search. You
19:40
know, I'm sitting at the dinner table and
19:42
how old is Brooke Shields now? I don't know.
19:44
I'll ask, you know, hey, gee, how old
19:46
is Brooke Shields now? That's really not AI. It's
19:49
really not the agent, I don't think. It's
19:51
more just nice. That's more of, yeah,
19:53
just voice search. Yeah, it really is. So
19:55
I'm not using the agents much at all.
19:57
The reason I ask that, the answer is,
20:00
what's the bet? If you're trying to deal with Samsung, what's
20:04
the bet of what the user
20:06
usage is going to be
20:08
of these things? Yeah.
20:10
Of a default agent versus a
20:12
default browser. I
20:15
don't know. But again, it goes back to
20:17
our prior discussion. the browser may
20:19
not be the, for me, it's
20:21
still the hinge point of everything. For you,
20:23
it's not. No, no,
20:25
definitely not. The browser is definitely
20:27
not the hinge point for everything, but
20:29
I've got my feet in all
20:31
different ponds, I suppose. But
20:33
I do think, and we
20:35
talked about this a little bit
20:37
last night on the Android
20:39
Faithful podcast as well, about
20:42
like having a perplexity as
20:44
a voice assistant tied into
20:46
my phone by default, does
20:48
eliminate, like I said, some
20:50
of those deeper kind of
20:52
connections to say Android operations
20:54
or settings or some of
20:56
Google's kind of, you know,
20:59
kind of the special sauce
21:01
that Google has integrated with
21:03
their services into Gemini and
21:05
stuff. So having perplexity
21:07
in that spot eliminates or
21:09
reduces some of that functionality.
21:12
But as we know in
21:14
AI, everybody kind of has
21:16
their favorites or they have
21:18
their AI models that they
21:20
turn to for very specific
21:22
certain things and their go -tos.
21:25
And so that might not matter as much.
21:27
It might not matter to every user
21:29
the fact that when they use their voice
21:31
assistant, it can't know what
21:33
to do with turn the lights on
21:35
in the living room or whatever
21:38
where Google's can. It might just matter
21:40
that, like, 90% of the time
21:42
when I want to use voice
21:44
AI, you know, search, it's because I'm
21:46
researching or it's because I'm doing
21:48
this thing, and therefore I want that
21:50
to be assigned to something that
21:52
has a different skill set that's more
21:54
tailored to what I actually need
21:57
not, you know, doing what Google thinks
21:59
I need. Yeah. Yeah. Yeah, well That's
22:02
a really good point is
22:04
that the, the, the assistant aka
22:06
agent will be charged with
22:08
having more knowledge about you. It
22:10
will be more personalized, necessarily. That'll
22:14
probably be the killer, um, not
22:16
killer app, I'm trying to say the
22:18
characteristic that makes it win. Right.
22:20
It also has a lot of stickiness to
22:23
it. Open AI is really working hard
22:25
on this right now. You know, they're really
22:27
kind of doubling down and opening things
22:29
up as far as memory for users. And
22:31
turns out that becomes really, really
22:34
handy and helpful over time as
22:36
the model learns what what you're
22:38
constantly looking for when you ask
22:40
it to do this certain thing,
22:42
that means as a user that's
22:44
less hoops I have to jump
22:46
through to get the answers that
22:48
I'm looking for. And I think
22:50
this is just kind of an
22:52
interesting fact is that the more
22:54
we give into these models and
22:56
get that memory, the more
22:58
sticky those models become because why would I
23:00
want to pick up, you know, take
23:02
my toys and go over there. When this
23:04
one already knows so much about me, I'd
23:06
have to start over again going over there.
23:08
And I think our phones are going to
23:10
get there too. And that'll be really interesting
23:12
from, from that perspective. Yeah.
23:15
And I can already hear replaying
23:17
some of the battles
23:19
of yore. One is obviously privacy,
23:21
knows too much about you. How
23:23
do you, you know, what control
23:26
do you have? And the other
23:28
is the filter bubble argument will
23:30
resurface. And the
23:32
filter bubble argument was made by Eli
23:34
Pariser. And then Axel Bruns wrote a
23:36
book called Are Filter Bubbles Real?, in
23:38
which he had lots of research that said,
23:40
no, they're not, that Google was not,
23:42
in fact, personalizing to the level that
23:45
was presumed, was not putting you in a
23:47
filter bubble, but an agent will. So
23:49
what was worried about in a
23:52
moral panic past will come back
23:54
perhaps with some cause. Yeah.
23:57
Yeah. Cool. Well,
23:59
we've got a whole lot more to
24:01
talk about. We're going to take a super
24:03
quick break. And then we'll talk a
24:05
little bit about Demis Hassabis' interview on 60
24:07
Minutes. That's coming up in a second. Trust
24:12
isn't just earned. It's demanded. And
24:14
whether a startup founder navigating your
24:16
first audit or a seasoned security
24:18
professional scaling your GRC program, proving
24:20
your commitment to security has never
24:23
been more critical, or more complex.
24:25
That's where Vanta comes in. Businesses
24:27
use Vanta to establish trust
24:30
by automating compliance needs across over
24:32
35 frameworks like SOC2 and
24:34
ISO 27001, centralize security workflows, complete
24:36
questionnaires up to five times
24:38
faster and proactively manage vendor risk.
24:41
Vanta not only saves you
24:43
time, it can also save you
24:45
money. A new IDC white
24:47
paper found that Vanta customers achieve
24:49
$535,000 per year in benefits
24:52
and the platform pays for itself
24:54
in just three months. Join over
24:56
9,000 global companies like Atlassian,
24:58
Quora, and Factory who use
25:00
Vanta to manage risk and prove
25:03
security in real time. For
25:05
a limited time, our audience gets
25:07
$1,000 off Vanta at Vanta
25:09
.com slash AI inside. That's V
25:11
-A-N-T-A dot com slash
25:13
AI inside for $1,000 off. Let's
25:17
talk about something we don't talk
25:19
about enough. What happens to all the
25:21
data we share with AI platforms
25:23
like ChatGPT or Claude? Every question
25:25
we ask, every idea we brainstorm,
25:27
it's all being collected and tied back
25:29
to us as individuals. But then
25:32
what? Does it get sold to advertisers,
25:34
corporations, maybe even governments? We've
25:36
also grown accustomed to social media companies selling
25:39
our data over the last decade, and I'd like
25:41
to think that maybe we've learned a thing
25:43
or two so we don't make the same mistakes
25:45
again. That's why I've been
25:47
using Venice.ai, who's sponsoring today's
25:49
episode. Venice.ai is private and
25:51
permissionless using leading open source
25:53
models for text, code, and image
25:56
generation. And it's all running
25:58
directly in your browser so there's
26:00
no downloads, no installs. In
26:02
fact, your chats and history live
26:04
entirely inside your browser. They
26:06
don't even get stored on Venice's
26:08
servers. Their pro plan
26:11
is where things get really interesting
26:13
though. You can upload PDFs to
26:15
get insights and summaries. You get
26:17
a user controllable safe mode for
26:19
deactivating restrictions on image generation. You
26:21
can customize how the AI interacts
26:23
by modifying its system prompt directly
26:25
and finally you get unlimited text
26:27
queries along with high image limits
26:29
that I couldn't even hit if
26:31
I tried. We talk often on
26:34
the podcast about the benefits of
26:36
open source AI, and that's exactly
26:38
what Venice dot AI is using.
26:40
If you care about privacy like
26:42
I do, or you just want
26:44
an uncensored and truly open AI
26:46
experience, Venice dot AI is worth
26:48
checking out. Go to my sponsor
26:50
link, Venice dot AI slash AI
26:52
inside, make sure to use the
26:54
code AI inside to enjoy private
26:56
uncensored AI. Use my code and
26:58
you'll get 20% off the pro
27:00
plan. That's Venice dot AI slash
27:02
AI inside with code AI inside
27:04
for 20% off the pro plan.
27:06
And we thank Venice dot AI
27:08
for sponsoring the AI inside podcast.
27:13
Did you, uh, did you get a
27:15
chance to see Demis's appearance on 60
27:17
minutes? No, I've been writing too much
27:19
about the head of 60 minutes quitting. Oh,
27:23
it was a different 60 Minutes story
27:26
entirely. And I was not aware
27:28
of that, actually. Oh, yeah, I was
27:30
on CNN last night talking about it,
27:32
actually. Oh, no kidding. Wow. I'll
27:34
have to look that up. Well,
27:36
yes, Google's DeepMind CEO did make
27:39
an appearance on 60 Minutes over
27:41
the weekend. And I
27:43
think it's interesting because there were
27:45
a lot of, you know, fresh
27:47
off of our conversation with Yann
27:49
LeCun from Meta. Um, there were
27:51
definitely a couple of points throughout
27:53
the interview where it was like,
27:56
okay, I've heard that before. Yann
27:58
was talking about this too, particularly
28:00
the fact that, um, that Demis
28:02
was, you know, basically saying, uh,
28:04
you know, AGI, as people define
28:07
it very differently or don't
28:09
is at least 10 years
28:11
down the line. So Demis
28:13
and Yann appear to be
28:15
on the same kind of
28:17
timetable as far as when
28:19
they think this randomly defined
28:21
concept of artificial general intelligence
28:23
will actually happen. And it's
28:25
not immediate. It's definitely somewhere
28:27
down the line. But he
28:29
did make the prediction that
28:31
AI could potentially cure all
28:33
diseases within this next decade.
28:36
All diseases, that's kind
28:38
of crazy to think. I feel like
28:40
anytime you put all or nothing, there's
28:42
at least a small amount of invalidation
28:44
in my mind to something like that.
28:46
Because really, all of them, maybe
28:48
that's just an easy way to say most,
28:50
but just say most would be probably where
28:52
I would go with that. But yeah, I
28:54
don't know. What do you think about that
28:56
prediction? Even most is too far. It's
28:59
a turbocharged view
29:01
of technological solutionism. And
29:04
the argument is that the internet really
29:06
brought out solutionism thinking it's going to
29:08
solve everything and it's going to bring
29:11
peace and certainly has not. But
29:13
this is a whole other level
29:15
where it's part of the AGI
29:17
-ASI presumption. that we're going to
29:20
get there and it's going to
29:22
be so amazing. It can do
29:24
these things and it's being ascribed
29:26
to it with no reason, no
29:28
basis. Will it help with
29:30
medicine? Yes. Will it help
29:32
find different uses of molecules? Yes.
29:35
Will it do things behind the scenes
29:37
like protein folding? Yes. All that's
29:39
yes. All that's amazing enough. But this
29:41
just goes overboard in a ridiculous
29:43
way in my view. And I think
29:45
it's harmful in the long run
29:47
on two sides of it. From
29:50
his perspective, it's dangerous
29:52
because I think it could put a target
29:55
on the back of the technology he's
29:57
building. Well,
29:59
it's going to fail. It's
30:01
not going to reach the heights that it's been predicted
30:03
to do. And on the other
30:05
hand, it makes it more fearsome. Oh,
30:07
it's all powerful. It's
30:09
God. It's not.
30:11
Let's take the wonder that we can have
30:13
with this on its level. Why does
30:15
it have to be everything? It's
30:18
it's irritating and I respect him and
30:20
I respect his work and he's a genius
30:22
at the stuff I'm not taking any
30:25
of that away. Just don't over sell it
30:27
man Yeah Hard hard not to I
30:29
suppose when you're that close to it and
30:31
you know work there Yeah, maybe maybe
30:33
they well they believe that they know something
30:35
that the rest of the world does
30:37
not nice That's that's part of the problem.
30:39
Yeah, is that I think I think
30:41
that sets up a distance that's that they
30:44
haven't learned what happened
30:46
in this
30:48
period of the
30:50
arc of
30:52
internet hype. The
30:55
internet hype was
30:57
hypey enough. This
30:59
is 10 times hypeier, wouldn't you agree?
31:02
Yeah. Yeah, I mean
31:04
I was a lot younger
31:06
when the internet was first, you know
31:08
coming around. So I certainly wasn't
31:10
as analytical at that time. I was
31:12
probably caught up in the hype more
31:14
than anything because I was very excited
31:17
by it. But it feels that
31:19
way from my point of view now,
31:21
you know. At the same time, it's
31:23
really impressive. You know some of the
31:25
accomplishments that have happened
31:27
here, right? Like he discusses DeepMind's
31:29
AlphaFold mapped more than 200 million
31:31
protein structures in a single year. And
31:33
If that was equated to the amount
31:35
of time it takes traditional researchers to
31:37
do their work prior to this, that
31:39
would have been one billion years of
31:41
traditional research time. And
31:43
you know, that's just amazing.
31:47
That's absolutely amazing. And
31:49
that gives the confidence to say, well, if
31:51
we're doing that now, then what are
31:53
we going to accomplish in the next 10
31:55
years? It's going to be, you know,
31:57
a million fold where we are right now.
32:00
Yeah, and presuming the hockey
32:02
stick is applicable to everything
32:04
in life. Because it presumes
32:06
the basis of if we do
32:08
this much now, then
32:11
you've given a definition of what this
32:13
is. And then you multiply it
32:15
by 100, you say, well, that's everything. No,
32:18
there's a lot of challenges in life. And
32:21
I'm glad the technology is. We're
32:23
both boosters of this to the
32:25
extent that it does the amazing
32:27
things. But the booster, the
32:30
high end boosterism just drives me nuts. Well,
32:34
well, Perplexity CEO Aravind Srinivas
32:36
agrees with, uh, Demis, called
32:38
him a genius after this,
32:40
uh, interview and says he,
32:42
he should be given all
32:44
the resources he needs to
32:46
realize this goal. So. That's,
32:49
by the way, perplexity entering the
32:51
conversation, it seems like more and
32:53
more right now. They're brilliant at
32:55
PR. They are brilliant. Whereas
32:57
OpenAI, obviously, was brilliant because it took over
32:59
the world and it's gotten all this money
33:01
and so on and so forth. But just in terms
33:03
of, and
33:06
Perplexity is not as hypey.
33:09
Oddly enough, right? It doesn't, I don't hear the AGI
33:11
stuff quite as much from them. What I see is,
33:14
we can do this, we can do that. Oh, we're
33:16
going to enter into this conversation later in that conversation.
33:18
We're going to buy browsers. We're going to buy TikTok.
33:20
We're going to agree with our competitors. They
33:22
just sneak into stories. Just brilliant.
33:25
Seems, seems to be the case. Yeah.
33:27
Um, This next one you put
33:29
in there and I did not have
33:31
this on my radar and I
33:33
thought this would be a really interesting
33:36
conversation. The Trump administration is considering a
33:38
draft executive order that would direct federal
33:40
agencies to integrate AI into K
33:42
through 12 education here in the US,
33:44
of course. It's in
33:46
a very early form at
33:48
this point, according to this
33:51
article in the Washington Post,
33:53
it would integrate AI into
33:55
teaching, also administration tasks, create
33:57
programs using AI
33:59
technologies with partnerships with
34:01
private companies and
34:04
nonprofits and schools to
34:06
create and promote
34:08
foundational AI literacy. And
34:11
yeah. Interesting.
34:14
I mean, this just seems to
34:16
go deep. And obviously, I
34:18
have not read the draft executive
34:20
order in its entirety. I've
34:22
just read this article to kind
34:24
of get a general sense
34:26
of what's going on here. And
34:29
I find myself a little
34:31
conflicted because, on one hand, I
34:33
think it's really important to
34:35
recognize this inflection point that we're
34:37
in right now with technology. and
34:40
to, you know, in many ways,
34:42
embrace it, get ahead, if not
34:44
ride that wave. On
34:46
the other hand, it feels
34:48
so sudden and drastic to
34:50
commit so quickly to the
34:53
level at which, you know,
34:55
this article seems to illustrate. All
34:58
of the, the word they used in the
35:00
Washington Post story, it is a pre
35:02
-decisional, just
35:04
a word I hadn't heard before. And I'm
35:06
sorry, concept of a plan. It's a concept of a plan. I
35:08
have to do the joke here, because the joke is obvious. But
35:11
they're having, it
35:13
instructs Education Secretary Linda McMahon to
35:15
prioritize federal grant funding for training
35:17
teachers, blah, blah, blah. So she's
35:19
going to put A1 sauce in
35:22
our schools. You see
35:24
the story last week that she confused,
35:26
she kept on calling AI A1. And
35:28
so A1 sauce had a
35:31
bonanza with that. And so we're
35:33
all going to pour A1
35:35
sauce over our students. Yes. I
35:38
mean, the obvious joke. But
35:40
this is, it's the problem with all these
35:42
executive orders. It's that with the stroke of
35:44
my Sharpie, I can change the world. And
35:46
Lord knows in some ways he's doing it.
35:49
But this is not that easy to just
35:51
say, we're going to put AI in
35:53
everything.
35:57
And the irony here is, while,
35:59
and I'm trying not to
36:01
get overly political, though my
36:03
views are fairly known, while
36:05
they're cutting into education in
36:07
every other way possible, right? Well,
36:10
that's part of what feels so drastic,
36:12
right? It's like, on one hand, taking an
36:14
axe to all this stuff, on the
36:16
other hand, let's replace it with AI,
36:18
right? And so
36:20
deeply, just based
36:23
on reading through this,
36:25
it feels like such
36:27
a deeply embedded kind
36:29
of solution. Obviously, they're
36:31
chasing down countries like
36:33
China who are pursuing you
36:36
know, integrating AI into their efforts in
36:38
education. And there's a big sentiment right
36:40
now in U .S. leadership that like,
36:42
well, we can't let China win the
36:44
AI game. We've got to win. And
36:46
so let's do do it by every
36:49
means necessary. And it's just, yeah, it's
36:51
such a response. It would be such
36:53
a response if it actually passed. Yeah.
36:55
And the fear, I think, is that
36:57
if you're a teacher, they're going
36:59
to come and say, well, yeah, can we just,
37:01
we just gave you 20 more students, but no
37:04
problem. You got AI. Right. Or
37:06
yeah, preparation's hard, curriculum's hard, but you got
37:08
AI now. So this makes your job easy. And
37:10
of course it doesn't, not at all. This
37:14
morning I watched something that's still going on
37:16
right now, William & Mary College. They did something
37:18
about education and AI. And
37:20
my friend Matthew Kirschenbaum, University
37:22
of Maryland, and Rita Raley from
37:24
UC Santa Barbara had done
37:26
a piece in the Chronicle of
37:28
Higher Education about whether AI
37:30
will kind of ruin universities. And
37:33
the joke that today was, well, AI doesn't need
37:35
to, it's happening elsewhere, but
37:37
not a joke. But
37:39
there's concern in the
37:41
academy at that level,
37:44
the university level, about
37:47
the relationship to
37:49
these big centralized companies,
37:52
about the resources that are provided
37:54
or not provided, about the
37:56
freedom that academics will have to do things
37:58
and whether they were talking about whether
38:00
they could run a model under the desk,
38:02
which in a way maybe you can
38:04
do with some of the stuff we're seeing.
38:10
And so there's big concerns at an
38:12
educational level about AI all around. Nobody
38:14
is saying it's not amazing. Nobody's
38:17
saying it's not a tool that we should
38:19
use. Nobody's saying we shouldn't teach our students.
38:21
But this presumption that, okay, I can
38:24
pour the A1 sauce into
38:26
a syllabus and I'm done, is
38:28
kind of ridiculous. But
38:30
there is a demand out there. So at Stony Brook,
38:32
I wrote a syllabus for a course in AI and
38:34
creativity. And last I
38:36
knew a week ago, it already had 91 students
38:38
signed up. And
38:41
so there's a popular demand and
38:43
desire for this stuff. And so
38:45
I think that's great all around.
38:47
Just do it smartly. Don't do
38:49
it as if you think one
38:51
signature and it's done. That's all.
38:53
Yeah, reactively and swiftly, although
38:55
that's you know, that's proving to
38:58
be kind of a hallmark of
39:00
where we are right now is
39:02
reactively and swiftly. For better
39:04
or for worse. So yeah, like
39:06
I said, I'm a little conflicted
39:08
on this. Because what
39:10
I don't want is for the
39:12
US education to only see the
39:14
bad potential of
39:16
AI, you know, well, students are
39:18
going to learn to cheat, blah,
39:21
blah, blah. Like, I do believe
39:23
that AI and what it, you
39:25
know, the current state of LLM
39:27
and everything that it's developing into
39:29
through agentic and beyond, like,
39:31
I don't think this goes
39:33
away. And I don't think that
39:35
wishing it or pretending like
39:37
it doesn't exist does any good.
39:39
And I don't think that
39:41
the younger generations coming up necessarily
39:43
see it or will see
39:45
it that way either. They're going
39:47
to embrace it in a
39:49
way that we older people are
39:51
not going to have as
39:54
easy a time doing because it's
39:56
not our normal. But
39:58
it's their normal. And so, you
40:00
know, so there is a
40:02
need to kind of embrace and
40:04
kind of lean into that
40:06
education piece. Just please do it
40:08
in a responsible way that
40:10
doesn't throw out a lot of
40:12
other goods and involves the
40:14
community and how it's done. Yeah.
40:16
And not just say, you're
40:18
not doing enough AI. Right.
40:20
Do more AI. We need more.
40:23
Everybody needs an open AI subscription.
40:25
There we go. We've done it. Now do
40:27
all your work on open AI. Okay. Perfect. We've
40:29
done it. We've done the AI thing. Yeah.
40:34
Oh, that's one way to do it. We'll
40:36
see. Let's
40:39
talk a little bit about AI generations
40:41
because I thought this article, another one
40:43
that you put in here actually was,
40:45
that was, I don't know,
40:47
I appreciated reading through it. I'm having
40:49
a hard time pulling it up here.
40:51
But if you go to archive.today,
40:53
you can get to it. Yeah. Not
40:56
subscribing to Business Insider. Yeah.
40:59
Well, the problem is,
41:01
I try and pull up the archive links on
41:03
Chrome. And for whatever reason, it never works for
41:05
me. Really? I have to load it in an
41:07
entirely different browser in order for it to work.
41:09
Anyway, that's weird. That's a
41:11
little behind the scenes. But you had
41:13
put in this this article that talks
41:15
a little bit about AI eras, like
41:18
the fact that like, you
41:20
know, not too long ago, we were in
41:22
the simulation era, which is kind of
41:24
the AlphaGo era, where
41:26
models were learning through repeated
41:28
digital simulations and reinforcement learning and
41:30
there was all the AlphaGo,
41:32
you know, and playing
41:34
the game and, whoa, can
41:37
you believe that the machine is
41:39
capable of playing this so quickly
41:41
and dominating and everything. That
41:43
was the beginning. Then there was
41:45
the human or rather is the
41:47
human data era where we are
41:49
right now, dominated by internet-scale
41:52
data and transformer models, of course, and
41:54
where we reside right now.
41:56
And then Google researchers David Silver
41:58
and Richard Sutton have proposed,
42:00
according to this Business Insider article,
42:03
a major shift in AI development
42:05
with a concept called the
42:07
era of experience. And yeah,
42:10
tell me a little bit about the era
42:12
of experience and what they say. So
42:14
yeah, I thought this was interesting. And
42:18
by the way, this paper is going
42:20
to be part of a book called
42:22
Designing an Intelligence from MIT Press. So
42:24
it's a preprint from Silver
42:26
and Sutton. And
42:28
I agree with where this goes.
42:30
The funny thing was, it
42:32
repeats what Yann LeCun told us.
42:36
Yeah. Right. So credit is
42:38
given to Google. And that's nice because
42:40
they don't get much credit in the AI
42:42
world as much as they want. But
42:44
this isn't just Google saying this. Uh, what
42:46
we, it's, uh, Jensen Huang, Yann
42:48
LeCun, Google are all
42:50
saying that the next phase
42:53
has to be experience
42:55
to teach AI reality. Uh,
42:58
and that's where you're really headed. And
43:00
it's going to happen. World models. Yeah.
43:02
Yeah. It's going to happen through, uh,
43:04
robotics and it's going to happen through
43:06
digital twins and it's going to happen
43:09
through, um, data gathering
43:11
through glasses and all that kind of stuff, but
43:13
it's got to have some sense of cause
43:15
and effect. And it doesn't have
43:17
that yet. It doesn't know that. Um,
43:19
so that's going to be really
43:21
interesting. So I think, I think that
43:23
the point of the paper is
43:25
good. Business Insider does kind of a
43:27
simplistic view that Google told the
43:29
world, what for? No. Yeah. Right. This
43:32
is where everybody's going. Um,
43:34
and I think we're waiting for that. No, I
43:36
don't even want to say leap. I
43:38
think it's just a, I'm going
43:40
to use the word paradigm again. You
43:42
know, when I worked at Delphi way back
43:44
when, they had a $5 paradigm jar.
43:46
If you use the word paradigm, you had
43:48
to put $5 in
43:50
it. It was that much of a word.
43:52
It's an easy word to lean into.
43:54
I am so guilty of that. In the
43:56
new paradigm, I've had to try and
43:58
back off of that word. So there will
44:00
be, I think, a paradigm shift. Oh,
44:02
it's 15 bucks already. uh,
44:06
this, this experiential layer, but I don't think
44:09
we've seen it yet. Apart
44:11
from robots, obviously learning some things, but
44:13
in ways we can't touch because
44:15
we ain't the robot or digital twin
44:17
factories, but we don't touch it
44:19
because we're not seeing what those alternative
44:21
futures are or anything like that.
44:23
I don't think we've seen a consumer
44:25
level version of experience yet. And
44:28
we're, oh, it understands that
44:30
the egg drops, it cracks.
44:33
Right. Right. Right. And so
44:35
I think that's what I'm
44:37
kind of waiting for is
44:39
the application layer of experience
44:42
learning. And it
44:44
could be a ways away. And
44:47
it's not going to be
44:49
like generative AI, because I
44:51
don't think a token -based
44:53
world, this I'm getting way
44:55
out of my depth here, way out, folks. But
44:58
I think this is part of what Yann LeCun told us in
45:00
the wonderful interview, which if you haven't seen it yet, Jason will
45:02
give you the link in a second. is
45:06
that when you're just dealing with
45:08
this abstraction of tokens, there's
45:11
no meaning. Well,
45:13
reality has meaning insofar as that's an
45:15
egg and this is what its properties are and
45:17
this is what can happen to it. And
45:19
it has to associate it with
45:21
that concept of egg. That's
45:25
not the case in generative AI. It's not
45:27
the case in machine learning as it stands
45:29
now. It will
45:31
be in robotics. Right? The hand
45:33
has to say, if it's an egg, don't
45:35
push too hard, because it'll break. Right.
45:37
You push too hard, it breaks. Right. So
45:39
I won't do that again. I've just
45:42
learned that about the egg or whatever that,
45:44
however it abstracts that notion of egg,
45:46
you know, spheroid weight
45:48
thing. And so
45:50
this is a little fascinating to me. I
45:52
just love this next part of it, but
45:54
I don't know when it's going to get
45:56
to our actual attention past theory. Yeah.
45:59
Yeah. Well, Yeah,
46:01
and I think one thing that
46:03
was kind of interesting to me that
46:05
I mean is is probably just a
46:07
different way of explaining what you were
46:09
just talking about is that the current
46:12
era that we are in, you know,
46:14
we often talk about data scarcity, about
46:16
the fact that these models are so
46:18
hungry and they just need so much
46:20
information to get smarter and smarter But
46:22
yet at the same time we've almost
46:24
fed it almost everything we can at
46:26
this point. The only way that they
46:28
get better, be, you know, leaps and
46:30
bounds better into a
46:32
kind of a new paradigm, as
46:34
you put it, is by
46:37
learning these skills and these limitations
46:39
themselves beyond just the information
46:41
that they've been fed, being able
46:43
to kind of, you know,
46:45
interact with the world and learn
46:47
by doing in the real
46:50
world and encountering things that you
46:52
know, the written word
46:54
that travels over the internet doesn't
46:56
even, it doesn't even describe
46:58
properly for a system like this
47:00
to truly understand it. Maybe,
47:02
maybe it understands it conceptually, but
47:05
it doesn't understand it from
47:07
a lived felt sense, let's say,
47:09
which is probably the wrong
47:11
way to put it for a
47:13
machine, you know, lived. Well,
47:15
well, right. Right. Exactly. Yeah. Well,
47:18
even learning is a troublesome
47:20
word. Yeah. But this paper at
47:22
the end of it emphasizes
47:24
mainly, not robotics, but agents.
47:27
Right. And it says that that's
47:30
what in everyday life, personalized
47:32
assistants will leverage consistent, continuous, rather,
47:34
streams of experience to adapt
47:36
to individuals' health, education, professional needs,
47:38
and long-term goals. Perhaps most
47:40
transformative will be the acceleration
47:42
of scientific discovery. AI agents will
47:44
autonomously design and conduct experiments. That's
47:47
an interesting word there. It's constantly experimenting. What
47:49
if I do this? What if I do that?
47:51
Well, then it has to be able to
47:53
try again. Right. That kind of, that's learning
47:55
in fields like material science, medicine, hardware design,
47:57
and so on. So
47:59
agents, what I hadn't
48:01
seen before is now I get
48:03
a better understanding, I think, of why
48:05
we see this rush to
48:07
agentic AI. It's not just because it's
48:09
the next thing. It's not just because it
48:12
gives us cool things. It's the next
48:14
training. Mm hmm. Yes.
48:16
That the agents are the way
48:18
it learns. Totally. And so
48:20
it's a business model that every
48:22
agent you have will add
48:25
value to their machines, to their
48:27
larger models. Yeah,
48:29
it's probably a small representation of this,
48:31
but like the example that pops into
48:33
my head is, you know, we're
48:35
thinking about agents that go
48:37
online, like, I need to buy those
48:39
plane tickets, and, agent, go do that,
48:41
and so it goes onto the site
48:44
and it goes through the standard you
48:46
know methods and learns the website and
48:48
everything but it encounters an issue that
48:50
it can't work around. Does
48:53
the agent stop there? Or does
48:55
the agent, like humans often do,
48:59
stop pause and think like, okay,
49:01
what is a way around
49:03
this hurdle? How could I possibly
49:05
get to this from a
49:07
different perspective and work myself around
49:09
it? And maybe that's a
49:11
legitimate way and maybe it's an
49:14
illegitimate way. But from a
49:16
human perspective, we don't just like
49:18
go, oh, face a hurdle,
49:20
stop. We kind of
49:22
think around these things and
49:24
in so doing we teach
49:26
ourselves alternative pathways and alternative
49:28
ways to see and understand
49:30
the world when those pathways
49:32
work or when they don't. Yeah,
49:36
I don't know how that connects with this. Exactly.
49:38
It's just kind of what popped in
49:40
my head is, is like, I think when
49:42
I think of agents that do things
49:44
in the current paradigm, it's like, did you
49:46
buy the plane ticket? And I think
49:48
maybe the agents down the road, a very
49:50
easy challenge would be, no, I didn't,
49:53
but here's my, here's how I figured out
49:55
how to get around it and do
49:57
it in this alternative, you know, method or
49:59
way. So two things on
50:01
that. I love these discussions. This is, this is
50:03
fun because we go off into other things. So
50:05
I found a paper yesterday that I
50:08
didn't put up on the rundown because
50:10
I didn't think it was relevant to us. So funny
50:12
that now I see, I'm trying to
50:14
see if I can find it in
50:16
my history, um, that explained
50:18
why it's so difficult
50:20
to search plane, um,
50:23
I can't find it, uh, plane fares,
50:25
and all of these charts, it
50:27
showed the level of how many
50:29
flights there are, possibilities there are
50:31
from one city to another. And
50:34
then all of these code
50:36
variants and fare variances and then
50:38
trying to compare them all.
50:40
So to then tell an agent,
50:42
it sounds like, oh, agent
50:44
makes plane reservation. Well,
50:46
it's incredibly complex. Right.
50:49
And, um, uh,
50:53
it's, I think we're shorthanding all
50:55
these tasks in life right now as if,
50:57
well, yeah, the agent will do it, but
50:59
we go with our judgment into
51:02
what it is. The next thing you raised, which is
51:04
interesting, is when you hit the barrier, and
51:06
this happened when we talked to
51:09
folks about art in this, is when
51:11
it does the wrong thing, is
51:13
that good or bad? Is that a
51:15
lesson learned, as we were just
51:17
saying? Right, right. And I think it
51:19
was Rita Raley from UCSB at
51:21
this event this morning at William &
51:23
Mary. I think
51:25
it was she who
51:28
said that the creativity
51:30
is leached out of
51:32
the models because
51:34
they've modified it down
51:36
so there's no unpredictability. Because
51:39
unpredictability is where you get
51:41
to problems, hallucinations, all that kind
51:43
of stuff, right? So you've
51:45
got to leave in mistakes to
51:47
learn, right? So
51:50
you've got to tell it to go off and find the plane
51:53
ticket and it doesn't find the plane ticket and then it has
51:55
to, that's part of the process. Learning is failing. And
51:58
it's really, absolutely. I mean,
52:00
absolutely in the human experience so
52:02
much is learned
52:05
through failure. Even though it's incredibly
52:07
uncomfortable, that's part of the
52:09
reason why you learned so much
52:11
from it. It's profound, right? And
52:13
Yeah, so that's necessary. And
52:15
do we, as
52:17
humans who have created this
52:20
thing, do we have the patience
52:22
for failure with these systems? And it's
52:25
largely it seems like people
52:27
express that they don't because they
52:29
continue to harp on AI
52:31
systems that aren't 100% information
52:33
accurate 100 % of the time.
52:35
They're just not going to be
52:38
that way. Same as humans.
52:40
Humans aren't either. We're
52:43
patient with humans because we realize it's part
52:45
of the human condition to be imperfect. But
52:47
we aren't with the machine. And, you know,
52:49
maybe we need to, maybe we need to give
52:51
the machine a little bit more grace than
52:53
we do right now. Well, if it
52:55
can't fail, it can't learn. If it can't
52:57
fail, it can't get that experience. And
52:59
so do we have that tolerance for that,
53:02
for that failure? Um, how do we build that
53:04
in? Cause I think
53:06
we have this idea that
53:08
the machine is a machine. So
53:10
it can't make mistakes. But.
53:15
Interesting stuff. Now this
53:17
next one. Oh, and you put in
53:19
another link here. Did you want to
53:21
talk about it? Only parenthetically: it's that
53:23
Business Insider gave
53:25
Google credit for this thing that we
53:27
just spent time talking
53:29
about. Similarly, IEEE interestingly came in, because
53:31
Google often is said to be behind:
53:33
behind OpenAI, behind others. IEEE came
53:36
in and said Google succeeds with LLMs
53:38
while Meta and OpenAI stumble. That's
53:40
the first time I've really seen major
53:42
credit being given by somebody of as
53:44
much stature as IEEE, saying
53:47
that just talking about the model, just talking about
53:49
the performance, I don't really want to go into
53:51
any depth here, but it was interesting to see
53:53
a slight vibe shift there. Google's
53:55
getting some good juice here. There
53:58
you go. You get what you deserve.
54:00
You go, Google. You go, Google. This
54:03
next one, oh boy,
54:05
got thoughts on this one.
54:07
A 21-year-old former
54:10
Columbia University student has
54:12
raised $5.3 million in seed
54:14
funding for his startup called
54:16
Cluely. It's an AI tool
54:18
designed to help users secretly quote, cheat
54:21
on everything. So
54:24
exams, interviews, sales
54:26
calls, first dates
54:28
as shown by the verifiably
54:30
creepy promotional video that they shared
54:33
on X that I'm pretty
54:35
sure only incels will find appealing.
54:37
The app concept was born
54:39
out of founder Chungin Lee and
54:41
co-founder Neel Shanmugam's (I'm sorry
54:43
if I mispronounced your name)
54:46
tool called Interview Coder, which they
54:48
developed while studying at Columbia University.
54:50
Did they develop this for their
54:52
work at Columbia University or was
54:54
this on the side? Because they
54:57
were ultimately suspended from the university, and
54:59
I couldn't figure out if this was something...
55:01
I'm guessing there's a connection, but it's not
55:03
clear. Yeah, it's not clear.
55:06
But anyways, the app was designed
55:08
to allow users to cheat
55:10
undetected. They were embroiled in disciplinary proceedings
55:12
at Columbia over the AI
55:14
tool. Right. And they
55:16
have both since dropped out. So...
55:19
So did they create the tool on
55:21
their own outside of the university or
55:23
was it something that they created and
55:25
began as a tool for developers to
55:27
cheat on knowledge of LeetCode, the platform
55:29
for coding questions that some in software engineering
55:31
circles consider outdated and a waste of
55:33
time. So maybe it was their
55:35
way to just say, yeah, you know,
55:37
but this goes to the definition: what is cheating? Well,
55:40
yeah, is it cheating? It's like
55:42
a calculator, right? And that's kind
55:44
of part of what they're saying. Right.
55:49
A story I tell in my book
55:51
that no one bought, called Public Parts, is
55:53
that Mark Zuckerberg, when he
55:55
was still at Harvard, had an art
55:57
class. And at
56:00
the end, the final of the class would
56:02
have to be writing things about all of
56:04
these pieces of art. And everybody knew that.
56:06
And so they would do study groups. And
56:08
so he organized a study group so
56:10
that everybody was sharing the best of this.
56:13
And the argument in the
56:15
book that Zuckerberg made was
56:18
that at the end, everybody
56:20
did better by using
56:22
social, by not seeing it as
56:24
competitive, by collaborating, they
56:27
all learned more and he had to study less. But
56:31
he said that the grades for everyone in
56:33
the class went up. So was
56:35
that cheating or was that a
56:37
smart use of social collaborative thinking? Is
56:40
it cheating to use the technology or
56:42
is it a smart use of technology
56:44
as an aid to you? I think
56:46
we have to re-examine the notion
56:48
of cheating. What is cheating? I mean,
56:50
is that merely... okay,
56:52
this is an interesting
56:54
question I just asked
56:56
myself, um, but I'll ask
56:58
you too: is cheating
57:01
being unfair? Is cheating
57:03
being... um, yeah, right,
57:05
what constitutes cheating? Yeah,
57:07
is cheating... uh, yeah,
57:09
because I mean I think
57:11
when I think of
57:13
cheating in my older
57:15
kind of school time paradigm,
57:18
I think: this is a question
57:20
that wants to know my knowledge
57:22
of something. And instead of sharing my
57:24
knowledge of something, I'm sharing what I've
57:26
written down or what
57:29
I'm reciting or regurgitating from this thing
57:31
in a in a moment where I
57:33
was expected to know it instead. But
57:35
now instead of knowing it. Right.
57:37
Right. But now leave school. You
57:40
have a similar task, right?
57:42
Is it... if you get
57:44
the answer you need, is
57:46
that cheating? By any means, does it...
57:48
does it matter,
57:50
right if you're tasked with a
57:52
job and you're able to do the
57:54
job? Does it matter if you
57:56
knew the answer or if you sought
57:58
the answer right? Yeah,
58:00
now when it gets to dating that
58:02
is creepy because it's Cyrano
58:05
de Bergerac. Am I really dating you
58:07
or am I dating the app? I
58:09
mean, that just felt like incredibly deceptive.
58:11
That promo video: the guy is
58:13
sitting at a table with, you
58:15
know, I don't know if it's a blind
58:17
date or a first date with an
58:20
attractive woman, of course. And she's
58:22
asking him questions and then you
58:24
see his kind of like Terminator view
58:26
coming up of the AI kind of coming
58:28
up with the answers that he can
58:30
feed to her. So he's essentially cheating on
58:32
the questions that she's asking, lying about,
58:34
you know, being being untruthful or dishonest about
58:37
his age when she asks, this is,
58:39
well, you look kind of young. Are you
58:41
sure you're 29? And he's, you know,
58:43
he's being fed all this information. And then
58:45
when she decides to walk out, then
58:47
like the AI kicks in to like
58:49
win her back. And so he recites
58:51
that from a very heartfelt place and almost
58:53
gets her to the point to where
58:55
she finally realizes I just need to get
58:57
out of here and leaves. And it
58:59
was just kind of like, I
59:01
don't know. I don't think that
59:03
does anything to endear me to what
59:06
you're talking about because I do
59:08
agree with what you're saying. Like there
59:10
was a time when calculators were
59:12
probably seen in the same perspective, spell
59:15
check. I mean,
59:17
for my preparation for these
59:19
shows, often I'm using AI
59:21
tools to research, which I
59:23
would have had to do manually and
59:25
by hand earlier, I would have to like
59:27
do a Google search and find the
59:29
stories and collect them, open them in many
59:31
windows, read through, pull information. Instead
59:33
of taking 20 minutes to do
59:35
that, I can take five
59:37
minutes or maybe even less and have
59:39
it pull back those things. And
59:42
so you could see that as cheating
59:44
for these shows, but it doesn't
59:46
mean that I don't synthesize the information
59:48
and do something with it. I mean, these
59:50
shows are a prime example. Hopefully, you get
59:52
benefit and value out of it. And if you
59:54
do, then it's just an example that it
59:56
kind of doesn't matter. For those of you listening or watching,
59:58
I hope you think that, oh, good. Jason and Jeff
1:00:00
read some stuff that I don't need to read
1:00:02
now. Of course, that pisses off media hearing it said
1:00:05
that way. But it's true. You don't have time
1:00:07
to read everything. And maybe in some cases you say,
1:00:09
Oh, that's interesting to me. I'm going to look
1:00:11
it up. I want to learn more, but
1:00:13
that's our choice. It's the same
1:00:15
exact problem we get to with search and
1:00:17
media right now and social media right
1:00:19
now is, is everything need not be the
1:00:21
destination. So
1:00:23
anyway, yeah. So these, these students are out.
1:00:25
I'd say more power to them. I
1:00:28
mean, yeah, I bet
1:00:30
they've got a pathway
1:00:32
here. I think this
1:00:34
will be interesting to
1:00:37
watch. Just drop the
1:00:39
manipulative kind of aspect
1:00:41
with dating and stuff.
1:00:44
Yeah. All right, how about this? The
1:00:46
other example they give, the main example they give is sales
1:00:48
calls. Is
1:00:50
that bad? Only
1:00:53
if you get lied to. Yeah,
1:00:55
it's yeah, I suppose it's bad
1:00:57
if it's dishonest but if it's not
1:00:59
and it's well targeted to what
1:01:01
my needs are and sells me what
1:01:03
I want Yeah, totally if I'm
1:01:05
a sales agent I'm gonna go through
1:01:07
training in order to effectively sell
1:01:10
and effectively say the right things and
1:01:12
effectively not say the wrong things
1:01:14
and recognize cues and all this kind
1:01:16
of stuff. If there's a tool
1:01:18
that enables me to do that part
1:01:20
of my job better I
1:01:22
don't see anything wrong with this. Well,
1:01:24
I mean, the key to all sales things,
1:01:26
there's a guy named Jeffrey Gitomer who
1:01:28
writes sales books like my first book. And
1:01:31
so I watched how this operates. And
1:01:33
same as in what I teach in journalism,
1:01:35
it's listening. It's listening
1:01:37
to people, understanding what their needs are, empathizing
1:01:39
with those needs and trying to come up with
1:01:41
solutions for those needs. And if your solution
1:01:43
is in fact legitimate and good, you
1:01:46
make a sale. There you go. Right. That's okay.
1:01:48
In fact, so we hear a lot about how
1:01:50
this is going to come to customer service and
1:01:52
phone mail jail and all the hell we go
1:01:55
through, right? So the agent is reading the script
1:01:57
and, you know, get off the damn script. And
1:01:59
the fear is that AI will be even worse
1:02:01
than that, but it may be far better than
1:02:03
that. It may understand my need better. It
1:02:06
may be more responsive to that need. It may
1:02:08
be able to get to a solution faster. I
1:02:10
was going to say maybe faster. Sometimes
1:02:12
if the AI is given the true
1:02:14
agentic power to be able, if it has agency,
1:02:16
to do so. Yes. Yeah.
1:02:18
Yeah. Very interesting. Let's
1:02:21
take a super quick break. Then we
1:02:23
got a few more stories to round things out,
1:02:25
including the Oscars kind of becoming a little
1:02:27
bit more welcoming to AI. This
1:02:29
episode of the AI Inside podcast
1:02:31
is sponsored by BetterHelp. I've noticed
1:02:33
a big shift in recent years
1:02:35
towards taking mental health seriously and
1:02:37
I welcome that change because I
1:02:39
recognize first hand the benefits of
1:02:41
taking care of my own mental
1:02:44
health. Therapy can be a transformative
1:02:46
experience and it definitely has been
1:02:48
for me but no question it
1:02:50
can be pricey. Traditional in-person
1:02:52
therapy can run anywhere between 100
1:02:54
to $250 per session, and that
1:02:56
adds up. And it really should
1:02:58
not stand in the way of
1:03:00
getting the help that's needed when
1:03:02
it counts. BetterHelp is online
1:03:04
therapy that can save you, on
1:03:06
average, up to 50% per
1:03:08
session. With BetterHelp, you pay a
1:03:10
flat fee for each weekly session,
1:03:12
and that adds up to big
1:03:14
cost savings over time. And not
1:03:16
only that, BetterHelp is much easier
1:03:18
to access than traditional therapy because
1:03:20
it's an online experience that meets
1:03:22
you where you are at with
1:03:24
quality care from more than 30,000
1:03:26
therapists at a price that makes
1:03:28
sense. You just click a button
1:03:30
to join. Your therapist is
1:03:33
there from wherever you happen to be. You
1:03:35
can get support with anything from
1:03:38
anxiety to relationships to everyday stress.
1:03:40
If you just aren't feeling it
1:03:42
with your current therapist, you can
1:03:44
easily switch to another at any
1:03:46
time. It's mental health within reach
1:03:48
and it's totally worth it. I
1:03:51
know firsthand, I used BetterHelp
1:03:53
a few years ago myself. It
1:03:55
was incredibly convenient and more importantly
1:03:57
impactful to my life. I felt
1:03:59
heard and supported and that's what
1:04:01
I really needed. Your well -being
1:04:04
is worth it. Visit betterhelp .com slash
1:04:06
a I inside today to get
1:04:08
10% off your first month.
1:04:10
That's better help H E L
1:04:12
P dot com slash a I
1:04:14
inside. and we thank BetterHelp for
1:04:17
their support of the AI Inside
1:04:19
podcast. You're
1:04:21
the owner of a small
1:04:23
business, which means you're also the
1:04:25
tech guy, and HR, and
1:04:27
personal assistant, and head honcho, and
1:04:29
intern. You could use
1:04:31
another pair of hands. Like the
1:04:33
experts you'll find at Verizon
1:04:36
Small Business Days, April 21st through
1:04:38
27th. Get a free tech
1:04:40
check, special deals, and more. Call
1:04:42
1-800-483-4428 or visit Verizon
1:04:44
.com slash Small Business to book
1:04:46
your appointment. Verizon Business.
1:04:52
All right, the Academy of
1:04:54
Motion Picture Arts and Sciences
1:04:56
officially updated its rules to
1:04:58
allow films that are using
1:05:00
generative AI to compete for
1:05:02
Oscars. So basically coming out
1:05:04
with an official stance to
1:05:06
say, hey, you know what,
1:05:08
just because AI tools were
1:05:10
used, which by the way,
1:05:12
I mean, if it
1:05:14
hasn't, you know, just
1:05:17
overtaken or at least you
1:05:19
know highly influenced how these movies
1:05:21
are made. It's going to
1:05:23
in a very swift move,
1:05:25
but this just ensures that.
1:05:27
They're basically saying, like, it's
1:05:29
okay as long as there's
1:05:31
a human involved, they say. They
1:05:33
do emphasize that the films
1:05:35
where human creativity and human involvement
1:05:37
are central will be more
1:05:39
heavily considered, so it's not
1:05:41
like a requirement. Um, but
1:05:43
the filmmakers do not have to disclose the
1:05:46
use of AI. That had been considered
1:05:48
as one thing, and that's not the
1:05:50
case here. So basically they're saying at
1:05:52
the end of the day, what we've been
1:05:54
talking about: AI is just another tool.
1:05:56
And yes, you can use it, just be
1:05:58
responsible. And hopefully you've got humans also
1:06:01
doing things on these things too. You can
1:06:03
still win a Pulitzer if you use
1:06:05
a typewriter. Yeah. Right. Hey, that's a great
1:06:07
example. Yeah. Yeah. I mean, when, when
1:06:09
word processing came in, it wasn't, it wasn't
1:06:11
to the level of moral panic. But
1:06:13
it was, um, some fear that somehow
1:06:15
this was too easy. Somehow this was, this was going
1:06:17
to change things. And it does change. I changed the
1:06:19
way I wrote events. Cause I wrote in the old
1:06:22
typewriter days. So it did,
1:06:24
it changed immensely. It made it easier.
1:06:26
It made it faster. It made it,
1:06:28
it gave me more power. It lowers the,
1:06:30
the, the barrier. It, it levels the
1:06:32
plane. It lets too many people in.
1:06:34
Yeah. Right. It doesn't quite gatekeep the
1:06:36
way we used to have it. Bingo.
1:06:38
Bingo. Yep. Yep. Yeah, it
1:06:40
was I'm a Howard Stern fan and
1:06:42
He complained when podcasting started. I think
1:06:44
I had this argument with
1:06:46
him once on the air. No, podcasting is
1:06:49
nothing. You got to learn radio. You
1:06:51
got to work your way up or just
1:06:53
fall apart. Right now you see the
1:06:55
Joe Rogans of the world are huge and
1:06:57
even he has to admit that okay.
1:06:59
Well, yeah, they're there. Yeah. Yeah, Howard Stern,
1:07:01
see, is he still rocking? I haven't
1:07:03
listened to his show in many years. Sirius.
1:07:05
He's on Sirius. He's got to pay
1:07:07
the bill. But yeah, he's become
1:07:09
an amazing interviewer. Speaking of, yeah. Yeah,
1:07:12
cool. Yeah, I used to really be into his
1:07:14
show. I used to love it. I need to check
1:07:16
it out again. And then finally,
1:07:18
yeah, OK, we're we're back to open
1:07:20
AI. But I thought this was a good
1:07:22
way to lead back. Yeah. They
1:07:24
begin and end with open AI
1:07:26
these days, but I thought this is
1:07:28
a good way to kind of
1:07:30
round out the show. CEO Sam Altman
1:07:32
shared that users say please and
1:07:34
thank you to ChatGPT, as we
1:07:37
know. We've talked about it before
1:07:39
and this results in tens of millions
1:07:41
of dollars in operational costs. He says
1:07:43
there are significant energy
1:07:45
costs to processing every word that is
1:07:47
typed into a chatbot. Of course, please
1:07:49
and thank you are also words that
1:07:51
enter in there. He couldn't help himself
1:07:53
in saying it's still a good idea
1:07:55
to be nice because you just never
1:07:57
know someday the robot might have mercy
1:07:59
on your soul. Couldn't
1:08:04
help but kind of get that. But how
1:08:06
many useless words... Well, you know, there's a
1:08:08
paradox of text. So I'm writing this book
1:08:10
about the Linotype. And if you go
1:08:12
back to the difficulty of writing
1:08:14
in the past, whether
1:08:16
it was by scribal quill,
1:08:19
right? Or by setting type one letter
1:08:21
at a time. All that was really
1:08:23
laborious, yet people were very long -winded then.
1:08:26
And we get to this age of the
1:08:28
internet, and especially things like Twitter, where we
1:08:31
could go on as long as we want,
1:08:33
and suddenly we come up with new ways to
1:08:35
be as economical with our language as we
1:08:37
can be. It's just kind of interesting
1:08:39
to me. So, On the
1:08:41
one hand, I think that we were
1:08:43
used to using the least words possible for
1:08:45
both Twitter and Google search. And now
1:08:48
AI comes along and says, no, say more.
1:08:50
But whenever you say more, it costs
1:08:52
money. It costs energy. I
1:08:54
mean, it all costs money. I
1:08:56
think people are dumping incredible quantities
1:08:58
of data into their LLMs, you
1:09:01
know, and a short one or
1:09:03
two letter nicety is not moving
1:09:05
the needle here. I mean, I
1:09:07
guess, you know, in the, in
1:09:09
the sense that everything at this
1:09:11
scale adds up to some large
1:09:14
number, but large number by comparison
1:09:16
to the actual large number, that
1:09:18
is the overall cost of all
1:09:20
words and everything. It's just a,
1:09:22
I mean, it's a speck. It's
1:09:24
a grain of sand. Yeah. And
1:09:27
there's new efficiency. And I remember
1:09:29
when, when, when search and web
1:09:31
came up with caching. That
1:09:33
was a big deal. It saved
1:09:35
effort. Oh speaking of which there
1:09:37
was a story that didn't
1:09:39
make the rundown that I'm trying
1:09:41
to mention real quickly because
1:09:43
we talked about this a few
1:09:45
weeks ago where Sites are
1:09:47
being driven mad by AI
1:09:49
scrapers coming in and costing
1:09:51
them a huge amount of bandwidth
1:09:53
and so Wikipedia the Wikimedia
1:09:55
Foundation finally said oh to heck
1:09:58
with this and so they've
1:10:00
put up 461,000 freely
1:10:02
accessible data sets. Here, don't
1:10:04
scrape us, go there,
1:10:06
don't. Take it. Okay,
1:10:08
makes sense. Yeah, we talked about this not too
1:10:10
long ago on the show. Exactly, I was talking
1:10:13
about if news and other sites did this
1:10:15
and said, here, just take it, it's okay. Here
1:10:17
it is. But stop scraping me every day,
1:10:19
because it's costing me money. And
1:10:21
I think this is, as is so often
1:10:23
the case, Wikimedia Foundation is ahead of
1:10:25
the rest and thinking smart about this
1:10:27
technology. Don't scrape me,
1:10:29
bro. So you too can go
1:10:31
get that data on Kaggle. Is it
1:10:33
Kaggle or Kaggle? I guess two Gs, I guess
1:10:35
Kaggle. I would say probably
1:10:37
Kaggle. It appears to me that
1:10:40
it's Kaggle, but who the heck
1:10:42
knows? Interesting.
1:10:45
Cool. Well, we have reached the end
1:10:47
of this episode of AI Inside. Jeff
1:10:49
Jarvis, thank you so much for being
1:10:51
with me for another hour of getting
1:10:53
smarter on artificial intelligence and everything in
1:10:55
between. I tried. Uh, The Web
1:10:57
We Weave is, uh, a wonderful
1:10:59
book that everybody should read. You
1:11:01
can go to Jeff Jarvis .com to
1:11:03
find that. The Gutenberg Parenthesis. Magazine.
1:11:06
Yes. And, uh, Magazine. You
1:11:08
cannot find Public Parts here though. You, you
1:11:10
said that was, that nobody read that.
1:11:13
I didn't. You can probably find it on eBay.
1:11:16
I don't know. Let's see if you go to Amazon.
1:11:21
Public Parts was of course my Howard
1:11:23
Stern joke, because he wrote Private Parts.
1:11:26
Yes, yes, indeed. Oh,
1:11:28
OK. You can still get the audiobook. Yeah,
1:11:31
let's see. I got it. You got
1:11:33
it. There you go. Yeah. Yeah. Hardcover, six
1:11:35
dollars. Paperback, twelve dollars.
1:11:38
Yeah, these are... and you can get
1:11:40
it on audiobook. Of course. No, maybe, you
1:11:42
know, let's. There you go. If
1:11:46
you want to go deep into
1:11:48
the catacombs of Jeff's work, you can.
1:11:50
And this is from 2011. Hey,
1:11:53
you've been writing a lot of books for a
1:11:55
long time. It's worth mentioning your whole catalog
1:11:57
from time to time. My oeuvre. Thank
1:12:00
you, Jeff, so much fun. Thank
1:12:02
you, Jason. Always a good time.
1:12:04
Thank you to everybody for
1:12:06
visiting the site, of course, where
1:12:08
you can go to, you know,
1:12:10
find all the ways to subscribe
1:12:13
to the show: aiinside .show. And
1:12:15
then, of course, there is the
1:12:17
Patreon: patreon .com slash AI Inside
1:12:17
Show. And I will
1:12:21
just go ahead and
1:12:23
throw that up on the
1:12:25
screen along with our
1:12:27
amazing executive producers, Dr. Jeffrey
1:12:29
Maricini, WPVM .7 in North
1:12:31
Carolina, Dante James, Bono
1:12:33
de Rick, and Jason Knifer,
1:12:36
By the way, he corrected me
1:12:38
on how to say his
1:12:40
name and Jason Brady are amazing,
1:12:42
amazing patrons that, you know,
1:12:44
support us on a level as
1:12:46
executive producers. So, patreon .com
1:12:49
slash AI Inside show.
1:12:51
But I think that's about it, y'all. Thank
1:12:54
you so much. Thank you again, Jeff. A
1:12:56
lot of fun. We'll see everybody next
1:12:58
time on another episode of AI Inside. Bye, everybody.