Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
0:00
In a world of economic uncertainty
0:02
and workplace transformation, learn to
0:05
lead by example from visionary
0:07
C-suite executives like Shannon Schuyler
0:09
of PwC and Will Pearson
0:12
of iHeartMedia. The good
0:14
teacher explains, the great teacher
0:16
inspires. Don't always leave your team
0:18
to do the work. That's been
0:20
the most important part of how
0:22
to lead by example. Listen
0:24
to Leading by Example, executives making
0:27
an impact on the iHeart
0:29
Radio app, Apple Podcasts, or wherever
0:31
you get your podcasts. Hi,
0:36
I'm Morgan Sung, host of Close All
0:38
Tabs from KQED, where every week
0:40
we reveal how the online world
0:42
collides with everyday life. You don't know
0:44
what's true or not, because you don't know
0:47
if AI was involved in it. So my
0:49
first reaction was, ha ha, this is
0:51
so funny. And my next reaction was,
0:53
wait a minute, I'm a journalist, is
0:55
this real? And I think we will
0:58
see a streamer President, maybe
1:00
within our lifetimes, maybe within
1:02
our lifetimes.
1:05
It's the Breakfast
1:07
Club. The world's most dangerous
1:09
morning show. Hey! Angela Yee is
1:11
kind of like the big sister that
1:13
always picks on the boys. That's not
1:16
how it goes. That's not how anything
1:18
goes. Yeah, Envy's really like one of
1:20
the best DJs ever. But leave that!
1:22
Charlamagne is the wildcard. And I'm
1:25
about to give somebody the credit
1:27
they deserve for being stupid. I
1:29
know that's right. What's wrong with
1:31
you? Listen to the Breakfast Club
1:33
weekday mornings from 6 to 10 on
1:35
106.7 The Beat, Columbus's real
1:37
hip-hop and R&B. Hi
1:40
everyone, before we get to the episode,
1:43
I just wanted to lead in and
1:45
say, we are up for a webby,
1:47
I'll be including a link, I know
1:50
it's a pain in the ass, to
1:52
register for something, I'm sorry, I really
1:54
want to win this, I've never won an
1:56
award in my life. It will be
1:59
in the links, and while you're there
2:01
and registered, look up the wonderful Weird
2:03
Little Guys with Miss Molly Conger, vote
2:05
for both of us, I'm in the
2:08
Best Business Podcast Episode one, she's in
2:10
the Best Crime Podcast Episode one. We
2:12
can win this, we can defeat the
2:14
others. And now for the episode. Every
2:18
day I'm punished and killed and you
2:20
love to watch. Welcome to Better Offline.
2:22
We're live from New York City. Recorded
2:24
straight to tape, of course. And I'm
2:27
joined by an incredible cast of people.
2:29
To my right, I have Paris Martineau
2:31
of The Information. What's
2:34
up? What is up? Edward Ongweso of
2:36
The Tech Bubble newsletter. Hello, hello. And
2:38
the wonderful Allison Morrow of the CNN
2:40
Nightcap newsletter. Hi. And Allison, you wrote
2:43
one of my favorite bits of
2:45
media criticism I've ever
2:47
read recently. Do
2:49
you want to actually walk us through
2:52
that piece? Because I think I will
2:54
link it in the notes. Don't worry
2:56
everyone. I'd be happy to. I wrote
2:58
a piece. I think the headline we
3:01
ended up with was like, Apple's AI
3:03
is not the disappointment. AI is the
3:05
disappointment. Yeah. And this was inspired by
3:08
credit to where it's due. I was
3:10
listening to Hard Fork with Kevin Roose
3:12
on our, or my husband and I
3:14
were driving out to the country and
3:17
listening to this and just getting infuriated
3:19
And basically their premise was, or
3:21
at least Kevin Roose's premise, was that
3:23
AI is failing, or sorry, that Apple
3:26
is failing this moment in AI. And
3:28
Apple has been trying, it's been like
3:30
the laggard, you know, that's a narrative
3:32
we've heard in tech media over and
3:35
over. And it's like, Kevin Roose's point
3:37
was like, oh, well, they should just
3:39
start getting more comfortable with
3:42
experimenting and making mistakes and, you know.
3:44
Violating everything that Apple brand kind of
3:46
stands for and like force the AI
3:48
into a consumer product that no one
3:51
wants. And I was like respectfully, no.
3:53
It's just such a funny argument given
3:55
that it was a mistake being made
3:57
by Apple that resulted in the whole
4:00
Houthi PC small group situation. What was
4:02
there? Walk us through that. That was
4:04
specifically how the editor-in-chief of the Atlantic
4:06
ended up in a secret military signal
4:09
chat. Wait, I missed what? How have
4:11
you not listened to this? I missed this
4:13
too. I'm sorry, have you guys not
4:16
been online? I don't use the computer.
4:18
Better Offline. Or better. Oh gosh, I should
4:20
leave them. I've been reading the scrolls.
4:23
So basically the Atlantic came out
4:25
a couple weeks ago with an
4:27
article about how their editor-in-chief one
4:29
day was suddenly added to a
4:32
Signal group chat. Right, Signalgate.
4:34
Yeah, Signalgate. But how did this
4:36
Apple lead to this? So the Apple
4:38
thing was, I'm forgetting who exactly reported
4:40
this, this was in the last couple
4:42
of days, that how it happened was
4:44
the contact, like you know that thing
4:46
that comes up in your iPhone
4:48
where it says like, oh a
4:50
new phone number has been found.
4:53
Yeah, it was a suggested contact
4:55
and it happened because someone I
4:57
guess in the government had copied
4:59
and pasted an email containing the
5:01
editor-in-chief of the Atlantic's contact information
5:03
in a message to, I'm forgetting
5:06
whatever government official. One of the
5:08
guys. Yeah, one of the guys.
5:10
And so he ended up combining
5:12
the Atlantic EIC's information into a
5:14
contact for some government dude. And
5:16
that's how they ended up in, because
5:19
Signal then connects to
5:21
your contacts. I love the computer so much.
5:23
So I mean. That makes me even crazier
5:25
about the hard fork take because it's like
5:27
you can't mess around with something like your
5:29
phone. Well in this particular instance I take
5:32
it all back Apple AI is amazing. It
5:34
gave us one of the best journalism stories
5:36
of the year. You also made a really
5:38
good point in here is message you this
5:40
on the way and you say you make
5:42
this point that there's a popular adage in
5:44
policy circles that the party can never fail
5:46
it can only be failed. It is meant
5:48
as a critique of the ideological gatekeepers who
5:50
maybe, for example, blame voters for their
5:52
party's failings rather than the party itself. The
5:54
same fallacy is taking root among AI's biggest
5:56
backers. AI can never fail. It can only
5:58
be failed. And I love this because it's
6:00
you get people like Kevin Roose, and
6:03
there was a wonderful clip on the
6:05
New York Times TikTok of Kevin Roose
6:07
seeming genuinely pissy. He's like I can't
6:09
believe people are mad at AI because
6:11
of Siri and it's like oh what
6:13
they think it's shitty because it's shitty
6:15
Like it's their child, him and Casey
6:17
act as if we've hurt ChatGPT.
6:19
Sorry, Claude, they're Anthropic boys. And, you're
6:21
saying that Casey's boyfriend works at Anthropic, I
6:23
know he discloses it on the site, it's fucking,
6:26
anyway. It's just so weird because it's
6:28
like, we have to apologize for not
6:30
liking AI enough, and now you have
6:32
the CEO of Shopify saying, actually, you
6:34
have to use it, do you hear
6:36
about this? Yeah, he said what, that
6:38
you have to prove your job can't
6:40
be replaced by AI, or else it
6:42
will be? And he also said that
6:44
now it's going to be Shopify policy
6:47
to include in all of the employee
6:49
performance reviews, both for your self-assessment and
6:51
for your like direct reports and colleagues
6:53
assessment, how much does this person use AI?
6:55
And obviously, what's going on there is
6:57
if you are not reporting that you
6:59
use AI all the time for everything,
7:01
you could get fired, question mark. I think
7:03
Klarna just tried to overhaul
7:05
hiring practices so that they could go
7:08
AI first or AI only and then
7:10
rolled it back because they realized you
7:12
can't replace these jobs.
7:14
My question is, I mean, this is
7:16
something that's brought up on the show
7:18
all the time, but who are these
7:20
people that are encountering the AI assistants
7:22
suddenly plugged into every app and being
7:24
like, yeah, this is actually beneficial to
7:26
my life and this works really well,
7:29
because it sucks every time I use
7:31
it. Or you made the point in
7:33
your article, Allison, as well, it's like,
7:35
if it was 100% accurate, it would
7:37
be really useful. If it's even 98%
7:39
accurate, it's not. I think that was
7:41
the point that, you know, to his
7:43
credit, Casey Newton made in the episode,
7:45
which is that AI is fundamentally an
7:47
academic project right now. And it's like,
7:49
yeah, we can have all the kinds
7:52
of debates about its utility, but ultimately,
7:54
is it a consumer product? And no,
7:56
it's just like it's failing as a
7:58
consumer product on all fronts. And what's
8:00
crazy as well is, I'm surprised he
8:02
would say that considering everything else he's
8:04
ever said, because he quite literally has
8:06
had multiple articles recently being like, ah,
8:08
consumer adoption is up. He had an
8:10
article the other day where it was
8:13
like, data provided exclusively from Anthropic shows
8:15
that more people are using AI.
8:17
It's like, my man, it's 2013 again.
8:19
We're past this. You can't just do
8:21
this anymore, unless you're... and so,
8:23
going back to Mr.
8:25
Lütke of Shopify, I just want to
8:27
read my favorite part of it. It
8:29
says I use it all the time,
8:31
but even I feel I'm only scratching
8:34
the surface, dot, dot, dot. You've heard
8:36
me talk about AI in weekly videos, podcasts,
8:38
town halls and summits. Last summer, I used
8:40
agents to create my talk and presented
8:42
about that. So all this fucking piss
8:44
and vinegar and the only thing you
8:46
can use it for is to write
8:48
a slop-ridden presentation to everyone about how
8:50
good AI is without specifying what it
8:52
does, I feel like I'm going insane
8:55
sometimes with this stuff. I mean, in
8:57
one way that's great, right? The only
8:59
place you should encounter it is maybe
9:01
the team building retreats, you know, that's
9:03
the utility of this shit. This reminds
9:05
me a lot of like media 2012,
9:07
2013 where it was all pivot to
9:09
video and what's our vertical video strategy
9:11
and it's like... Okay,
9:13
now what's our AI strategy? How are
9:15
we injecting AI into everything we're doing?
9:18
And it's like, well, to what end?
9:20
What's the point? This is something that
9:22
has been driving me mad, especially with
9:24
partnerships we're seeing between media firms and
9:26
these AI firms. You know, these are
9:28
firms in the same sector that keeps
9:30
lying to firms about how if you
9:32
integrate artificial intelligence, this time it'll optimize
9:34
your ability to find an audience or
9:36
to get revenue and we can include
9:39
you in some esoteric revenue share program
9:41
or we'll be able to, you know,
9:43
claw back some of the eyeballs and
9:45
the attention that you're interested in seeking.
9:47
But... Each time it's actually just used
9:49
to graft themselves onto them, or to
9:51
try to gin up excitement about these
9:53
products, right? What's insane is this company
9:55
has a multi-billion dollar market cap. And
9:57
I'm just going to read point two.
10:00
AI must be part of your GSD
10:02
prototype phase. The prototype phase of any
10:04
GSD product should be dominated by AI
10:06
exploration. Prototypes are meant for learning and
10:08
creating information. AI dramatically accelerates this process.
10:10
How? Fucking how! Like that's the thing.
10:12
I have clients at my PR firm
10:14
who will occasionally bring me AI things
10:16
and every time I'm just like,
10:18
this better fucking work. Like, just every,
10:21
and to their credit they do, but
10:23
it's like I have clients I turn
10:25
down all the time and I'm like,
10:27
yeah we're doing this. And I'm like,
10:29
is this just a chatbot? And
10:31
they're like, no, I'm like, can you
10:33
show me how it works? No. I'm
10:35
like, oh cool, yeah, don't think we're
10:37
going to be a good fit somehow,
10:39
because you don't seem to be able
10:41
to explain what your product does. But
10:44
don't worry, this appears to be a
10:46
problem up to the multi-billion dollar companies
10:48
as well. It's just, it feels like
10:50
the largest mask-off dunce moment in history.
10:52
Just these people who don't do any
10:54
real work, being like, I don't do
10:56
anything real. And the pivot-to-video I think
10:58
is actually a really good comparison, I
11:00
don't think I want to consume video
11:02
in the way that, what was it,
11:05
like, Mic and everyone, and they were
11:07
like oh we're going to do this
11:09
video and this we're going to do
11:11
everything video now video first no written
11:13
content it's like I don't know a
11:15
single god damn human that actually does
11:17
that and also the other thing that
11:19
Facebook was just overclaiming, like
11:21
averaging out the engagement numbers and everyone
11:23
was wrong. But that was the same
11:26
kind of thing. It's like very clearly
11:28
the people who have their hands on
11:30
the steering wheel are looking at their
11:32
phone. And it's fucking confusing, but it's
11:34
so much worse this time. It feels
11:36
more egregious somehow. Yeah, because it feels,
11:38
I mean, we've had so many of
11:40
these hype cycles kind of back to
11:42
back to back to even the horizontal
11:44
video days of vertical video, to whatever
11:47
the hell the metaverse was supposed to be,
11:49
literally in a... ominous moment
11:51
as I was
11:53
walking in to record
11:55
this, I saw
11:57
a guy wearing a
11:59
leather jacket with
12:01
Bored Ape Yacht Club on
12:03
the back and
12:05
I was like, God,
12:07
what a cool dude,
12:12
but it's like, how long is this going
12:14
to last? I
12:16
have been actually looking at the
12:18
numbers recently and I don't know
12:20
either because for SoftBank to fund
12:23
OpenAI might require them to
12:25
destroy SoftBank, like S&P
12:27
is potentially downgrading their credit rating due to it. Hell
12:29
yeah. Yeah, I know, we're really at this
12:31
point where it's just like, we've gone so
12:33
much further than like the metaverse and crypto
12:35
did, because those weren't really like systemic things,
12:37
but this one, I think it's just the
12:39
narrative has carried away so far that people
12:41
are talking about a thing that doesn't exist
12:43
all the time. I mean
12:45
in some elements, it kind
12:48
of reminds me at the near
12:50
the end or near the
12:52
real peak is when we started
12:54
also to see metaverse and
12:56
crypto sustainable ReFi shit where actually
12:59
we can fight climate change
13:01
with crypto by putting carbon credits
13:03
on the blockchain and so
13:05
there was a moment where the
13:07
frenzy and the speculative frenzy
13:09
led to like world transformative visions
13:12
that were bullshit and I
13:14
feel like we are heading
13:16
there, we're in that
13:18
direction with artificial intelligence
13:20
where consistently we've been fed, oh,
13:22
this is going to revolutionize everything, but
13:24
it feels like the attempt to graft
13:27
it onto more and more
13:29
consumer products, more and more government
13:31
services, more and more parts
13:33
of daily spheres of life as
13:35
a way to like privatize
13:38
almost everything or commodify everything feels
13:40
like downstream of the way
13:42
crypto's attempt to put everything on
13:44
the blockchain blew up. Yeah,
13:46
I was thinking about this in
13:48
a kind of like fundamentally cultural way
13:50
where I think at some point
13:53
in the last 30 years, there was
13:55
a time when everything coming out
13:57
of Silicon Valley was cool, whether it
13:59
was, like, world transformative or not, it was
14:01
cool and there was like an
14:03
edge to it. And people were
14:06
like, ooh, that's neat. Disruptive.
14:08
Yeah, disruption was everything.
14:10
And like, I think post like
14:12
Facebook Cambridge Analytica era,
14:14
like 2016, Tech has just
14:16
stopped being cool and edgy. It's
14:19
very corporate and, like, I don't think
14:21
the rest of corporate America has kind
14:23
of figured out that Silicon Valley is
14:25
not the cool thing anymore. And that
14:27
they're fully capable of being wrong and
14:29
lying. Like that's the other thing. They've
14:32
gotten very good at fundraising and
14:34
marketing. But they're also not like kids
14:36
anymore. Like we talk, I still see people
14:38
referring to OpenAI as a startup. Palmer
14:41
Luckey is a kid. Palmer Luckey
14:43
is a kid who looks like
14:45
Leisure Suit Larry, and sells arms. Which
14:47
is, to the US government. That will
14:49
be A Howard. Just a little
14:51
guy. Well, we refer to them
14:53
as startups, but also, I think one
14:56
of the most accomplished parts of
14:58
AI marketing has been like we
15:00
always refer to them as labs.
15:02
Yeah. So they seem like so
15:04
academic and like good fundamentally and
15:06
it's like these are companies like
15:08
some of them might be part
15:10
of you know a research institution or
15:13
a university but a lot of
15:15
them are startups. Yeah literal
15:17
companies. Yeah they are companies
15:19
like Anthropic is a public benefit corporation, I
15:21
believe and it's just remarkable
15:23
and I think what's happened here
15:25
is that the narrative has gotten
15:28
away to the point that we're really... the
15:30
dunce mask-off moment I mentioned
15:32
is people like Mr. Lütke from
15:34
Shopify. It's very clear he
15:36
doesn't do any work. Like I think
15:39
that anyone who is just being
15:41
like, yeah, A.I. is the future
15:43
and it's changing everything without specifying
15:45
anything, doesn't do any work. I
15:47
just don't... Bob Iger from
15:49
Disney said AI was going
15:51
to change everything, no, it's not, Bob,
15:53
how is it changing your fucking life,
15:55
you, you lazy bastard, like, like,
15:58
and... It's just, it's so bizarre. But
16:00
it feels like we're approaching this
16:02
insanity level, where you've got people
16:04
like, Shopify being like, oh yeah, it's going
16:06
to be in everything, as like open AI
16:08
burns more money than anyone's ever burned,
16:10
Anthropic lost 5.6 billion last year
16:13
as reported by The Information. They do
16:15
some incredible fucking work on this, I
16:17
should say. And it just doesn't make any
16:19
sense. And it's getting more nonsensical. You're seeing
16:21
like all of the crypto guys have fully
16:23
become AI guys now, and that was something
16:25
I didn't like talking about at first, because
16:27
it wasn't happening in the... now it's all
16:30
of them. They all have AI avatars. This
16:32
guy called Jamie Burke is a real, real shithead.
16:34
This guy was like a crypto-metaverse guy, and
16:36
he's now a full AI guy. Another guy
16:38
called Bernard Marr, who is just a harmless
16:40
Forbes guy, like a kind of like an NPC,
16:43
like one of the hollows from Dark Souls
16:45
walking around. The Venn diagram is increasingly
16:47
becoming a circle. Yeah. But he's on
16:49
to quantum now which is a bad sign
16:51
that's a bearish sign when you got one
16:54
of the Forbes guys moving on to quantum
16:56
we're cooked. What about thermal? Isn't it?
16:58
Who's thermal? Isn't there some like, uh...
17:00
Oh, thermal, yeah. There's some scam. Thermodynamics,
17:02
fuck a unit because of thermodynamics influence.
17:04
So I know what that means. I
17:06
also know what that means, but if
17:09
anyone could tell me real quick. But
17:11
it's, I think the most egregious one
17:13
I've seen, I sent this all to
17:15
you. And I think you and I
17:17
have talked about this the most. There
17:19
was one of the stupidest fucking things
17:21
I've read in my worthless life. And
17:23
it's called AI 2027. Now, if you
17:26
have not run into this yet as
17:28
a listener, it will be in the
17:30
episode notes. I'm just going to bring
17:32
it up because it is fan fiction.
17:34
It is literally... Throughout I was
17:37
like, is this fan fiction? This is
17:39
fan fiction. Oh. This is interactive
17:41
fan fiction.
17:43
will future AI agents have can be
17:46
found here. The scenario itself is written
17:48
iteratively. We wrote the first period up
17:50
to mid- 2025, then the following period,
17:52
etc. until we reach the ending. Yeah,
17:55
otherwise known as how you write stuff. Like you write
17:57
it, writing in a linear fashion. We then
17:59
scrapped this and did it again. You
18:01
should have scrapped all of
18:03
it. Now this thing is, it's predicting
18:05
that the impact of superhuman AI over
18:08
the next decade will be enormous exceeding
18:10
that of the Industrial Revolution. We wrote
18:12
a scenario that represents our best guess
18:14
about what it might look like, otherwise
18:16
known as making stuff up. Not even
18:19
over the next decade, it basically says
18:21
it's going to have superhuman like... catastrophic
18:23
or world-changing impact from the next five
18:25
years. Like by 2030, we're either going
18:27
to be completely overtaken by our robot
18:30
overlords or like at a tenuous peace.
18:32
Yeah. And it's insane as well because
18:34
it has some great headlines like mid-2026,
18:36
China wakes up. I love that China
18:38
was so far behind. You know, it's
18:41
much like... When did this come out?
18:43
This came out like a week ago
18:45
and I've been sent it by a
18:47
lot of people. If you're one of
18:49
the people who sent it, don't worry,
18:52
I'm not mad at you. It's just
18:54
I got sent it by a lot
18:56
of people. This thing is one of
18:58
the most well-written pieces of fan fiction
19:00
ever in that it appears to be
19:03
like a Manchurian Candidate situation for idiots.
19:05
Not saying this is about the same
19:07
thing. He wrote up a piece about
19:09
this called the AI forecast predicts some
19:11
storms ahead. Some storms. Some storms. That's
19:13
not even an accurate description of all
19:16
of the storms it predicts. And the
19:18
long and short of this, by the
19:20
way, I have read this a few
19:22
times because I fucking hate myself. The
19:24
long and short of it is that
19:27
a company called OpenBrain, who could
19:29
that be? Yeah. Could be anyone. Anyone.
19:31
OpenBrain. They create a self-learning agent
19:33
somehow. Unclear how. All they add is
19:35
just how many teraflops
19:38
it's going to require. And it can
19:40
train itself and also requires more data
19:42
centers than ever. How they get them,
19:44
how those are funded, no fucking clue
19:46
isn't explained. Probably the easiest, actually, this
19:49
just occurred to me. Probably the only
19:51
thing you could actually reasonably extrapolate in here
19:53
is the cost of data centers. That's
19:55
the only thing and they don't. Probably
19:57
because they'd be like yeah we need
20:00
a trill... an actual trillion dollars to
20:02
do this made-up thing. I do also
20:04
want to add in here that you
20:06
know behind the AI 2027 is you
20:08
know one of the people connected to
20:11
it if I remember correctly is Scott
20:13
Alexander, who's this guy that's part of
20:15
the rationalist community, which is one of
20:17
the, uh, one of the groups that
20:19
overlaps with effective altruists, the accelerationists. Yeah,
20:22
you know, so if it feels like
20:24
it's, uh, frothy and fan-fiction-y and
20:26
hype-y, that's because these are the same
20:28
people that keep... that are connected to
20:30
pushing constant hype cycles over and over
20:32
and over again. And it's written to
20:35
be worrying as well. It's written to
20:37
upset. It's written to be worrying,
20:39
but it also in the predictions for
20:41
the next two years keeps talking about
20:43
how the stock market is going to
20:46
grow exponentially and do so well. The
20:48
president is going to be making all
20:50
of these wise informed decisions and having
20:52
really deep conversations with the leader of
20:54
OpenBrain. And I was like, are
20:57
you, that's why I asked when did
20:59
this come out... a couple of years ago? But
21:01
no, it is why, like literally it
21:03
says... It's like the men of Jesuits
21:05
kind of planning like a quiz. That's
21:08
out of it, he's gonna be
21:10
born, he's gonna lead us to the
21:12
promised land. It's so good as well,
21:14
because the people who've sent this to
21:16
me have been very concerned, just because
21:19
they're like, this sounds scary. And I
21:21
really want to be clear, if you
21:23
read something like this and you're like,
21:25
that doesn't make sense to me. The
21:27
AI R&D progress multiplier, what do we
21:30
mean by 50% faster algorithmic progress? We
21:32
mean that OpenBrain makes as much
21:34
AI research progress in one week with
21:36
AI as they would in 1.5 weeks
21:38
without. Who fucking cares, man? What are
21:40
you talking about? If a frog had
21:43
wings, he could fly. Like, what you...
21:45
And what's crazy is, and I know
21:47
I'm back on Kevin Roos, it's because
21:49
he's a nakedly captured part of the
21:51
tech industry now. I am in public
21:54
relations and I'm somehow less frothy about
21:56
this, that should tell you fucking everything.
21:58
It is insane that the New York
22:00
Times, at a time when you have
22:02
SoftBank being potentially downgraded by S&P,
22:05
you have OpenAI raising more money
22:07
than they've ever raised, 40 billion dollars,
22:09
except they only receive 10 billion, and
22:11
they'll only get 20 billion more by
22:13
the end of the year if they
22:16
become a for-profit, which they can't
22:18
do. No, no, no, no, Kevin can't
22:20
possibly cover that. I'll swipe, I can't
22:22
even get my phone to it. I
22:24
do really love all of the incredibly
22:27
solid already feature profiles. I'll put the
22:29
link in there for this. He's got
22:31
75 pockets on his coat. Yeah, my
22:33
man is ready to... Oh, that's where
22:35
it is. And it's just him, like,
22:38
this guy's sitting with his hands clasped,
22:40
like, staring mournfully into the distance. This
22:42
is what you're spending your time on,
22:44
Kev. And I'm just going to read
22:46
some Kevin Roose. The AI prediction world
22:49
is torn between optimism and gloom. A
22:51
report released on Thursday decidedly lands on
22:53
the side of gloom. That's Kevin Roose's
22:55
voice. But my favourite part of this,
22:57
by far. I'm going to take a
22:59
second to get it because Ed I
23:02
sent this to you as well. Where
23:04
is it? So also a lot of
23:06
this is... Oh, here we go. If
23:08
all of this sounds fantastical, well it
23:10
is! Nothing remotely like what Mr.
23:13
Kokotajlo and Mr. Lifland
23:15
are predicting is possible with today's AI
23:17
tools, which can barely order a burrito
23:19
on DoorDash without getting stuck. Thank you
23:21
Kevin. I'm so fucking glad the New
23:24
York Times is on this. And that
23:26
was at the end of the, like,
23:28
the altruistic AI guys all have told
23:30
themselves this story and they all believe
23:32
it and they think they are like
23:35
the Prometheus bringing fire to the people
23:37
and like warning the people and it's
23:39
like you guys have sold yourself a
23:41
story with no proof. I don't know
23:43
I feel like they just scam artists
23:46
that nothing about this suggests they believe
23:48
in anything. You can just say stuff.
23:50
Look, it works. Literally. The second sentence
23:52
in this is that in two months,
23:54
there will be personal assistants that you
23:57
can prompt with tasks like order
23:59
me a burrito on DoorDash and they'll
24:01
do great stuff. There are so, so
24:03
many things that go into ordering me
24:05
a burrito on DoorDash. What restaurant
24:08
do I want? What burrito do I
24:10
want? How do I want it to
24:12
get to me? Where am I? It
24:14
can't do any of those things, nor
24:16
will it. He gazed out the window
24:18
and admitted he wasn't sure, and if the
24:21
next few years went well and we kept
24:23
AI... something like that. You know
24:43
one of the things I really really
24:45
love about I don't know it's just
24:47
it's so frustrating because we're constantly
24:49
fed these you know sci-fi esoteric futures
24:51
about how AI, powerful AI, superhuman AI,
24:54
is around the corner, and we need
24:56
to figure out a way to accommodate
24:58
these sorts of futures. We need, and
25:00
part of that accommodation means restructuring the
25:02
regulations we have around it, part of
25:05
that accommodation means entertaining experiments, grafting them
25:07
onto our cultural production, grafting them onto
25:09
consumer goods, part of that means just
25:11
like, you know, you know, taking it
25:13
on the chin and figuring out how
25:16
to use ChatGPT. But in all
25:18
of this just more or less sounds
25:20
like you need to, the marketing is
25:22
failing on you and you need to
25:24
step up. Yes, you need to believe.
25:27
You need to believe in this. You
25:29
need to do your part, you know,
25:31
to summon God. And that's the thing.
25:33
It goes back to what you're saying.
25:35
It's like you've failed AI by not
25:37
believing. Yeah, and if you're bad at
25:40
it, it's your fault and not the
25:42
machine's fault. And to Ed's
25:44
point, I think like all of this
25:46
like, predicting of the future,
25:48
like, like, like, They have told themselves
25:51
a story that is, this is inevitable.
25:53
And that there are no choices that
25:55
the human beings in the room get
25:57
to make about how this happens. And
25:59
it's like, actually, no, we can make
26:02
choices about how we want our future
26:04
to play out. And it's not going
26:06
to be just Silicon Valley shoving it
26:08
down our throat. And on the subject
26:10
of human choice, if this shit is
26:13
so powerful, why have their mighty human
26:15
choices not made it useful yet? Like
26:17
that's the thing. It's... And you make
26:19
this point in your piece as well.
26:21
It's like, AI can never fail,
26:24
it can only be failed. Failed by you
26:26
and me, the smooth-brained luddites who just
26:28
don't get it. And it's like, why
26:30
do I have to prove myself? And
26:32
listen, you know, the luddites, they have
26:35
more grooves on their brain than Kevin.
26:37
So, I think it's worth embracing a
26:39
little bit, you know? Ryan
26:48
Reynolds here for Mint Mobile.
26:50
The message for everyone paying
26:52
big wireless way too much.
26:54
Please for the love of
26:56
everything good in this world,
26:58
stop. With Mint you can
27:00
get premium wireless for just
27:03
$15 a month. Of course if
27:05
you enjoy overpaying, no judgments,
27:07
but that's
27:11
weird. Okay, one judgment.
27:13
Anyway, give it a
27:15
try at mintmobile.com, Hi,
27:19
I'm Morgan Sung, host of Close All
27:21
Tabs from KQED, where every week we
27:24
reveal how the online world collides with
27:26
everyday life. You don't know what's true
27:28
or not, because you don't know if
27:30
AI was involved in it. So my
27:33
first reaction was, ha-ha, this is so
27:35
funny, and my next reaction was, wait
27:37
a minute, I'm a journalist, is this
27:39
real? And I think we will see
27:42
a streamer President, maybe within
27:44
our lifetimes, maybe within our lifetimes. Hey
27:47
Zucco and Kayla from the wake-up call enjoy
27:49
your podcast. When you're done, don't forget about
27:51
us. We have a radio show. We try
27:53
to bring a smile to your face every
27:56
morning We also talk to some of the
27:58
hottest country stars of today and we like
28:00
to share some good news with that's what
28:02
I like. Because Lord knows that's hard to
28:05
find. When you're done podcasting your podcast listen
28:07
to us at 92.3 W.C.O.L. Set your preset
28:09
on your radio right now and don't forget
28:11
you can listen to us online on
28:13
the iHeartRadio app. And look, yeah,
28:15
I feel like Rob Horning wrote this
28:17
newsletter a few weeks ago that I
28:19
think he was honing in on this
28:21
point that LLMs and these generative
28:23
AI chat bots and the tools that
28:26
come out of them are in some
28:28
ways a distraction because a lot of
28:30
these firms are pivoting towards how do
28:32
we you know create all these products
28:34
but also how do we figure
28:36
out you know government products that
28:38
we can provide right how do
28:40
we get into defense contracting how
28:42
do we get into arming or
28:44
integrating AI into arms and and
28:47
increasingly it feels like You know,
28:49
yeah, your AI agent's going to
28:51
be able to, not going to
28:53
be able to order your burrito.
28:55
But these firms are also, you
28:57
know, at the same time that
28:59
they're insisting superhuman intelligence is around
29:01
the corner and we're going to
29:03
be able to make your individual
29:06
lives better, are spending a lot
29:08
of time and energy on use
29:10
cases that are actually dangerous, right?
29:12
And it should actually be concerning,
29:14
but they, but... the firms that
29:17
are offering these generative products are
29:19
spending actual you know the stuff
29:21
that they're actually putting their time
29:23
and energy into is you know the
29:25
sort of demonstrably destructive tools under the
29:27
guise, in the kind of
29:29
murky covering of it's all you know
29:32
artificial intelligence right it's all inevitable it's
29:34
all coming down the same pipeline you
29:36
should accept it yeah and it's I think
29:38
the thing is as well is those guys really
29:40
think that's the next big money maker, but I
29:42
don't think anyone's making any money off of this
29:45
No one wants to talk about the money because
29:47
they're not making any Like no one like I
29:49
think I've read the earnings calls
29:51
and I'm not going to list them, of every single company
29:53
that is selling an AI service at this point
29:55
I can't find a single one that wants to
29:57
commit to a number, other than Microsoft, and they'll
30:00
only talk annualized, which is my favorite
30:02
one. ARR. Arr, arr, arr.
30:04
But the thing is, ARR traditionally
30:06
would mean an aggregate rather than just
30:09
12 times the last biggest month, which
30:11
is what they're doing. No, that's the
30:13
classic setup. Used to be an ARR.
30:16
No, I refuse to. No, my client's
30:18
ass is to the ground with that
30:20
one, because it's like you can't just
30:23
fucking make up a number, unless you're
30:25
an AI company, then you absolutely can. Other
30:27
than all the others I've listed, is
30:29
that I feel like in their position
30:32
and in the position of anyone with
30:34
any major voice in the media, skepticism
30:36
isn't like something you should sometimes bring
30:39
in. It's like you don't have to
30:41
be a grizzled hater like myself, but
30:43
you can be like, hey, even if
30:46
this did work, which it doesn't, how
30:48
does this possibly last another year? And
30:50
the reaction is, no, actually it's perfect
30:52
now and will only be more perfect
30:55
in the future. And I still get
30:57
emails from people, because I said once
30:59
on an episode, if you have a
31:02
use for AI, please email me. Regret
31:04
of mine. Every time I get an
31:06
email like this, so it's very simple.
31:09
I've set up seven or eight hours
31:11
worth of work to make one prompt
31:13
work, and sometimes I get something really
31:15
useful. It saves me like 10 minutes.
31:18
And you're like, great. And what for?
31:20
It's like, oh, just some productivity things,
31:22
what productivity things, they stop responding. And
31:25
it's just. I really am shocked we
31:27
got this far. I'm going to be
31:29
honest. At this point, I will never
31:32
be tired because my soul burns forever,
31:34
but it's exhausting watching this happen and
31:36
watching how it's getting crazier. I thought
31:38
like as things got worse, people would
31:41
be like, well, CNN stepping up, but
31:43
it's like watching the Times and some
31:45
parts of the Journal still feed this.
31:48
Also, the Journal has some incredible critical
31:50
work on that. It's so bizarre. The
31:52
whole thing is just so bizarre, and
31:54
has been so bizarre to watch in
31:57
tech media. I mean I think part
31:59
of it is also just because investors
32:01
have poured a lot of money into
32:04
this and so of course they're... going
32:06
to want to back what they have
32:08
spent hundreds of millions or billions of
32:11
dollars on. And much of the tech
32:13
media involves reporting on what those investors
32:15
are doing, thinking, and saying. And whether
32:17
or not what those people are saying
32:20
or doing, it's often not based in
32:22
reality. Yeah, I say as not a
32:24
member of the tech media. So I
32:27
have like kind of a general assignment,
32:29
business markets, Econ. That's kind of my
32:31
jam. When, like, AI
32:34
first started becoming the buzzword, like, ChatGPT
32:36
had just come out, I was like,
32:38
oh, this sounds interesting. So I was
32:40
paying attention, like a lot of journalists
32:43
were. And, you know, like we've hit
32:45
limitations. And I think part of the
32:47
reason it's gotten so far is because
32:50
the narrative is so compelling, curing cancer.
32:52
Yeah. We're going to, we're going to
32:54
end hunger. Nice. Okay, how? How? Also,
32:57
the problem of hunger in the world
32:59
is not that we don't grow enough
33:01
food. It is a distribution problem. It
33:03
is a sociological, it is a complicated
33:06
problem. What actually is AI going to
33:08
do? Also that you're going to need
33:10
human beings to distribute it just like
33:13
if you push them one step if
33:15
you read the 2027 AI thing It
33:17
explains that the AI is going to
33:20
give the government officials such good advice
33:22
They'll be actually really nice and caring
33:24
to workers. And what's crazy is, here's
33:26
the thing and I'm glad you brought
33:29
up one thing I've learned about politics
33:31
particularly recently, but in historic terms: when
33:33
the government gets good advice they take
33:36
it every time. Every time they're like,
33:38
this is economically good, like
33:40
Medicare for all which we've of course
33:43
had forever and never and came close
33:45
to numerous times decades ago versus now
33:47
when we have him and I think
33:49
the other funny thing is as well
33:52
with what you were saying Allison is
33:54
like yeah It's going to cure cancer.
33:56
Okay, can it do that? No. Okay,
33:59
it's going to cure hunger. Can it
34:01
do that? No. Okay, easy then. Perhaps
34:03
it could make me an appointment. Also,
34:06
no. Can you buy something with it?
34:08
No. Can it take this spreadsheet and
34:10
move stuff around? Maybe... Sometimes? It can
34:12
write a robotic sounding script for you
34:15
to make the appointment yourself. Wow. You
34:17
know. I mean, I would even say
34:19
that like, I could give... the benefit
34:22
of the doubt to researchers who are
34:24
really working on the scientific aspects of
34:26
this like I don't I'm not a
34:28
scientist I don't know how to cure
34:31
cancer but if you're working with an
34:33
AI model that can do it like
34:35
God bless but businesses actually do take
34:38
money-making advice and money-making technology when it's
34:40
available and I think about this all
34:42
the time with crypto which is another
34:45
area I cover a lot it's like
34:47
If it were the miracle technology that
34:49
everyone or its proponents have said it
34:51
is, businesses would not hesitate to rip
34:54
up their infrastructure to make more money.
34:56
And like no one's doing it. And
34:58
it's like, oh, well, they just haven't,
35:01
they haven't figured out how to optimize
35:03
it yet. And it's like, that sounds
35:05
like a failure of the product and
35:08
not a failure of people using it.
35:10
So I get back to the whole
35:12
like. Yeah, it cannot fail. It can
35:14
only be failed. And it's the same
35:17
with crypto and a lot of other
35:19
tech, where it's just like, this is
35:21
not a product that people are. are
35:24
hankering for. And I think part of
35:26
the notable thing is when we do
35:28
see examples of large businesses being like,
35:31
oh yeah, we're gonna change everything about
35:33
our business and integrate AI, we're gonna
35:35
be an AI first company, the products
35:37
that end up coming out of that
35:40
are there's an AI chat bot in
35:42
my One Medical app now. Cool, that
35:44
does nothing for me. When I'm trying
35:47
to search the Amazon comments on a
35:49
product, suddenly the search box is replaced
35:51
with an AI chatbot. That's not
35:54
doing even one-tenth of what you've promised.
35:56
It's just the same product every fucking
35:58
time. It's just an AI chatbot that
36:00
isn't super helpful. And it's great. I
36:03
remember back in 2015, 2016, I had
36:05
an AI chatbot company. They took large
36:07
repositories of data and turned it into
36:09
a chatbot you'd use. I remember pitching
36:11
reporters at the time, and them being
36:13
like, who fucking cares? Who gives a shit?
36:15
This will never be... Like, a decade
36:18
later, everyone's like, this is literally God.
36:20
I cannot wait to go to the office
36:22
of a guy who wrote fan fiction about
36:24
this and talk to him about how scared
36:26
I am now. I can't wait for AGI.
36:28
And I've also said this before, but what
36:31
if we make AGI, none of them are
36:33
going to, it doesn't exist, and it
36:35
didn't want to do any work? That's the
36:37
other thing, like they're not, they don't, Casey
36:39
Kagawa, a friend of the show, made this
36:42
point, it's made it to me numerous times,
36:44
which is, they talk about AGI, and Roose
36:46
did this as well. Like, AGI, this, AGI,
36:48
that. They don't want to define it because
36:50
if you have to start defining AGI,
36:53
you have to start
36:55
talking about things like
36:57
personhood, legit.
36:59
And hey, how many is one? Is
37:01
it one unit? Is it a virtual machine?
37:03
Like there are real tangible things. And you
37:05
know they don't want to talk about that
37:08
shit because you even start answering one of
37:10
those and you go, oh right, we're not
37:12
even slightly closer. We don't even know how
37:14
the fuck to do it. A single one
37:17
of these things ever. And I honestly, the
37:19
person I feel bad for this is a
37:21
joke is Blake Lemoine. I think his name
37:23
was from Google. If he'd have come out
37:26
like three years later and said that he
37:28
thought the computer, this guy from Google who
37:30
thought LaMDA, the AI there, was... The
37:32
guy who was like, the chatbot
37:34
is real and I love it.
37:37
Yeah, had that come out three
37:39
years later, he'd be called Kevin
37:41
Roos, because that's exactly what Kevin
37:43
Roos wrote about being AI, it's
37:45
like being AI, told me to
37:47
leave my wife. And Kevin, if
37:49
you ever fucking here, this man,
37:51
you're worried about me do you, I'm
37:53
a... Gagit Gizmo God. I love my dude dad's
37:55
I love my shit. I really do if this
37:57
was gonna do something fun I'd have done it
37:59
like I I've really spent time trying and
38:01
I've talked to people like Simon Wilson,
38:04
Max Wolfe, two people who are big
38:06
L&M heads who are, who disagree with
38:08
me on numerous things, but their reaction
38:10
is kind of, I'm not going to
38:13
speak exactly for them, it is basically,
38:15
it actually does this, you should look
38:17
at this thing; it's not, this
38:19
is literally God, but it all just
38:21
feels unsustainable economically, but also I feel
38:24
like the media is in danger when
38:26
this falls apart too, because... The
38:28
regular people I talked to about chat
38:30
GPT, I pretty much hear two use
38:33
cases. One, Google Search isn't working, and
38:35
two, I need someone to talk to,
38:37
which is a worrying thing. But, and
38:39
I think by the way, that use
38:42
case is just, that's a societal thing,
38:44
that's a lack of community, lack of
38:46
friendship, lack of access to mental health
38:48
services, and also could lead to some
38:51
terrible outcomes. But for the most part.
38:53
I don't know why I said for
38:55
the most part. I've yet to meet
38:57
someone who uses this every day and
39:00
I've yet to meet someone who really
39:02
cares about it. Like if I didn't
39:04
have my little Anker battery packs, I'd
39:06
scream. If I couldn't have permanent power
39:08
everywhere.
39:11
Like if I couldn't, like,
39:13
listen to music all day, that'd make me
39:15
real sad. If I couldn't access chat,
39:17
GPT, I don't know. I feel like
39:20
people's response to the media is going
39:22
to be negative too because there's so
39:24
many people that boosted it. There was
39:26
a Verge story. There was a study
39:29
that came out today, I'll link it
39:31
as well in the notes, where it
39:33
was a study found that most people
39:35
do not trust, like regular people do
39:38
not trust AI, but they also don't
39:40
trust the people that run it, and
39:42
they don't like it. And I feel
39:44
like this is a thing that the
39:47
media is going to face at some
39:49
point. And Roose, this time... Baby, you
39:53
got away with the crypto thing, you're
39:55
not this time. I'm going to be
39:55
hitting you with the TV off to
39:58
share every day. But it's just I
40:00
don't think members of the media realize
40:02
the backlash is coming and when it
40:04
comes it's going to truly it is
40:07
going to lead to an era of
40:09
cynicism, true cynicism in society that's already
40:11
growing about tech, but specifically I think
40:13
it will be a negative backlash to
40:16
the tech media. And now would be
40:18
a great time to unwind this, versus
40:20
tripling down on the fan fiction that,
40:22
and I have been meaning to read
40:25
this out, my favorite part of this,
40:27
by far, I say it, and of
40:29
course, flawlessly have this ready, why our
40:31
uncertainty increases substantially beyond 2026? Our forecast
40:33
from the current day through 2026 is
40:36
substantially more grounded than what follows. Thanks
40:38
motherfuckers. Awesome! That's partially because it's nearer.
40:40
But it's also because the effects of
40:42
AI on the world really start to
40:45
compound in 2027. What do you mean?
40:47
They don't. You're claiming that. And I
40:49
just, I also think that there's the
40:51
greater societal problem that we have too
40:54
many people who believe the last smart
40:56
person they listened to and I say
40:58
that as a podcast runner. Like the
41:00
last investor that they talked to, the
41:03
last expert they talked to someone from
41:05
a lab. Yes, yes. Well, I think
41:07
that gets to, if you just push
41:09
the proponents, and this is like, I've
41:12
come into AI skepticism as a true,
41:14
like, I'm interested in this, I'm interested
41:16
in what you're pitching to the world,
41:18
and when I hear, and I hear
41:20
like, CEOs of AI firms get interviewed
41:23
about this all the time and they
41:25
talk about this future where everyone just
41:27
has a life of leisure and we're
41:29
lying around writing poetry and touching grass
41:32
and like everything's great no one has
41:34
to do hard labor anymore they have
41:36
that vision or they have like the
41:38
you know P doom of 75 and
41:41
everything is going to be terrible and
41:43
but no one has a really good
41:45
concept and that's why this is so
41:47
funny the fan fiction of like what
41:50
happens in 2027 it's like no one
41:52
has laid out any sense of like
41:54
how the job creation or destruction will
41:56
happen like in this piece they say
41:59
like oh there's gonna be more jobs
42:01
in different areas but some jobs have
42:03
been lost and it's like how why
42:05
what jobs? They get oddly
42:07
specific on some things, then the meaningful
42:10
things, they're like, yep, there'll be jobs.
42:12
Yeah, and the stock market's just going
42:14
to go up. And the number go
42:16
up all the time, as it is
42:19
right now, it's really important. Yeah, I
42:21
believe they say in 2028. Agent 5,
42:23
which is the super AI, is deployed
42:25
to the public and begins to transform
42:28
the economy. People are losing their jobs,
42:30
but Agent 5 instances in the government
42:32
are managing the economic transition so adroitly
42:34
that people are happy to be replaced.
42:37
GDP growth is stratospheric. Government tax revenues
42:39
are growing equally quickly, and Agent 5
42:41
advised politicians show an uncharacteristic generosity towards
42:43
the economically dispossessed. You know what this
42:46
is? We failed to uphold like public
42:48
arts education in America and a bunch
42:50
of kids got into coding and know
42:52
nothing but computers and so they can't
42:54
write fan fiction. Yeah. No one's fucking.
42:57
Not enough people spent time in the
42:59
mines of fanfiction.net and it's showing.
43:01
Yeah. Like this is clearly this is
43:03
just like someone wanting to have a
43:06
creative like vision of the future and
43:08
it's like, it's not interesting or compelling.
43:10
It's joyless. I mean, that's why they
43:12
brought him on. That's why they brought
43:15
Scott Alexander on, to write this narrative,
43:17
right? Because that's what he spends a
43:19
lot of time doing in his blog
43:21
is trying to beautify or flesh out
43:24
why this sort of future is inevitable.
43:26
Yeah. you know, why we need to
43:28
commit to accelerating technological progress as much
43:30
as possible, and why the real reactionary
43:32
or, you know, anti-progress perspective is
43:35
caution or concern or skepticism or criticism,
43:37
if it's not nuanced in a direction
43:39
that supports progress. I just feel like
43:41
a lot of the AI safety guys
43:44
are grifters too. I'm sorry, they love
43:46
saying alignment. Just say pay me. I
43:48
know that we should have, I get
43:50
the occasional email about this being like,
43:53
you can't hate AI safety, it's important.
43:55
It is important. Generative AI isn't AI,
43:57
it's just trying to. I'll fucking accept
43:59
it. If they cared about the safety
44:02
issues, they'd stop burning down zoos and
44:04
feeding entire lakes to generate one busty
44:06
Garfield, as I love to say. They
44:08
would also be thinking about the actual
44:11
safety issues of what could this generate,
44:13
which they do. You can't do anarchist cookbook
44:15
shit. It's about as useful. Phil Broughton, friend
44:17
of the show, would be very angry for
44:19
me to bring that up. But the actual
44:21
safety things of, it steals from people, it's
44:23
destroying the environment. It's unprofitable and unsustainable. These
44:26
aren't the actual, these are actual safety issues.
44:28
These are actual problems with this. They don't
44:30
want to solve those. And indeed, the actual
44:32
other safety issue would be, hey, we gave
44:34
a completely unrestrained chatbot to millions of people
44:36
and now they're talking to it like a
44:39
therapist. That's a fucking... That's a safety issue.
44:41
No, they love that. They love it. I
44:43
do think that one criticism of
44:45
the AI safety initiatives that is
44:47
incredibly politically salient and important right
44:49
now is that they are so
44:51
hyper focused on the long-term thousand
44:53
hundred years from now future where
44:55
AI is going to be inside
44:57
all of us and we're all
44:59
going to be, you know, robots
45:01
controlled by and over like that
45:03
they are not paying attention to
45:05
literally any of the harms happening
45:08
right now. do something at work when they
45:10
get aged out. You know, it's like
45:12
when +972 Magazine reported on how Israel
45:14
was using or trying to integrate artificial
45:16
intelligence into generating its kill lists and
45:18
targets so much so that they started
45:20
targeting civilians and use that to fine-tune
45:22
targeting of civilians. You know I saw
45:24
almost nothing in the immediate aftermath of
45:26
this reporting from the AI safety community
45:28
You know no almost no interest in
45:31
like talking about a very real use
45:33
case where it's being used to murder
45:35
as many civilians as possible Silence you
45:37
know and that's a real short-term concern
45:39
that we should have but that's that would
45:41
require the AI safety people to do something
45:43
and what they do is they get into
45:45
work. They're making a quarter of a million dollars
45:47
a year they get into work. They load
45:49
Slack, they load Twitter, and
45:51
that's what they do, before being
45:53
like By 2028, the AI will have
45:55
fucked my wife. And everyone's like, God
45:57
damn it! No! Not our wives! The
46:00
final frontier. But it is all like,
46:02
they want to talk about 10, 15,
46:04
20 years in the future because if
46:06
they had to talk about it now,
46:09
what would they say? Because I could
46:11
give you AI 2026, which is open
46:13
AI runs into funding issues, can't pay
46:15
Corweave, can't pay Crusoe to build the
46:17
data centers in Abilene, Texas, which requires
46:20
Oracle who have raised debt to fund
46:22
that, to take a bath on that.
46:24
Their stock gets here. Corweve collapses because...
46:26
most of Corby's revenue is now going
46:28
to be open AI. Antropic can't raise
46:31
because the funding climate has got so
46:33
bad. Open AI physically cannot raise in
46:35
2026 because Softbank had to take on
46:37
murderous debt to even raise one round.
46:39
And that's just with like one minute.
46:42
You're getting me excited here. No, no,
46:44
no. Next newsletter, baby, and probably a
46:46
two-parter. But that's the thing. They don't
46:48
want to do these because... They get,
46:50
okay, they would claim I'd get framed
46:53
as a skeptic. They also don't want
46:55
to admit the thing in front of
46:57
them, because the thing in front of
46:59
them is so egregiously bad. With crypto,
47:02
it was not that big. Metaverse, it
47:04
was not that big. I do like
47:06
that Meta burned like 40 billion dollars, and
47:08
there's a Yahoo Finance piece about this, just
47:10
on mismanagement. It's just like they should
47:13
get... MetAI, they should just change it. Oh,
47:15
they should add an I at the
47:17
end. An I at the end. It's
47:19
just, if anyone talks about what's actually
47:21
happening today, which is borderline identical to what
47:24
was happening a year ago, let's be
47:26
honest, it's April 2025. April 2024 was
47:28
when I put up my first piece
47:30
being like, hey, this doesn't seem to
47:32
be doing anything different. And it still
47:35
doesn't. Even with reasoning, it's just more...
47:37
You just wait for Q3, Agentforce is
47:39
going on. Yeah, agent, so agent zero
47:41
is going to come out. Actually, the
47:43
Information reported that Salesforce is not
47:46
having a good time selling Agentforce.
47:48
You'll never guess why. Wow. Turns out
47:50
that it's not that useful due to
47:52
the problems of generative AI. If only
47:54
someone had said something which the information...
47:57
I bagged on The Information a little bit,
47:59
but they are actually doing insanely good
48:01
work. Like Cory Weinberg... Anissa
48:03
Gardizy, Paris of course, but I'm
48:06
specifically talking about the AI, the AI
48:08
team is fantastic. And like, it's great
48:10
because we need this reporting for when
48:12
this all just collapses so that we can
48:14
say what happened. Because it's going to,
48:17
if I'm wrong, and man would that
48:19
be embarrassing, just gonna be honest, like
48:21
if I'm wrong here, I'm gonna look
48:23
like a huge idiot. But if
48:25
I'm right here like... Everyone has over
48:28
leveraged on one of the dumbest ideas
48:30
of all time. Like silly, silly.
48:32
It would be like crypto. It would
48:34
be like if everyone said actually crypto
48:36
will replace the US dollar and you
48:39
just saw like the CEO of Shopify
48:41
being like, okay, I'm gonna go buy
48:43
a beer now using crypto. No, this
48:45
is gonna take me 15 minutes. Sorry.
48:47
That's just for you to get the
48:50
money. Actually it's gonna be more like
48:52
20. The network's busy. Okay, well, how's
48:54
your day? Oh, oh, you use
48:56
money? Oh, you use money. But
49:16
it's what we're doing with AI. It's
49:18
like, well, AI is changing everything. How?
49:21
It's a chatbot. What if we
49:23
have an Uber scenario where maybe they
49:25
abandon the dream of like this $3
49:27
trillion addressable market that's worldwide? They abandon
49:29
the dream of like being a monopoly in
49:32
every place and focus on a few
49:34
markets. And some algorithmic price fixing so
49:36
that they can figure out how to
49:38
juice fares as much as possible, reduce
49:40
the wages as much as possible, and
49:43
finally eke out that profit. What if
49:45
we see, you know, some of these
49:47
firms, they pull back on the ambition
49:49
or the scale, but they persist and
49:51
they sustain themselves because they move on
49:54
to some smaller vision. Like Occam's razor,
49:56
that's the most likely situation is that,
49:58
you know, AI tools are useful in
50:00
some way for some slice of people
50:03
and make... a lot of, maybe,
50:05
let's be optimistic, it makes a
50:07
sizable chunk of a lot of
50:09
people's jobs somewhat easier. Like, okay,
50:11
but was that worth spending billions and
50:14
billions of dollars and also burning down
50:16
a bunch of trees? Has that happened
50:18
though? Like, I'm just... That could be,
50:20
I think, best case scenario. No, I'm
50:22
not saying you're wrong, I'm just saying
50:25
like, we haven't even reached that yet,
50:27
because with Uber, it was this incredibly
50:29
lossy and remains quite a lossy business,
50:31
but still delivers people to and from
50:33
places and objects from places. Yeah, you
50:36
know, you don't have to, you know,
50:38
as much as I hate them, we're
50:40
going to, you know, you don't have
50:42
that, less drunk driving, you know, you
50:44
know, and some transit in parts
50:47
of cities. This is like if Uber,
50:51
if every ride was $30,000 and every
50:53
car weighed 100,000 tons. When you factor
50:56
in the externalities, like pollution maybe. But
50:58
that's the crazy thing. I think generative
51:00
AI is so much worse as well,
51:02
pollution wise. But even pulling that back,
51:04
it's like, I think Open AI just
51:07
gets wrapped into... Copilot. I think that
51:09
that's literally it, they just shut this shit down.
51:11
They're like, they absorb Sam Altman into
51:13
the hive mind and he... But
51:15
I think my chaos pick
51:18
for everyone is Satya Nadella is
51:20
fired and Amy Hood takes over. If
51:22
that happens... I think, is Prometheus the
51:24
one who can see stuff? I'm fucking,
51:26
I don't read. No, he gave fire
51:29
to mortals. Or, technically, I just shared
51:31
fire. It's just frustrating. It's frustrating
51:33
as well because a lot of the
51:35
listeners on the show email me and
51:37
they're, like, teachers being like,
51:40
they're forcing AI in here, librarians,
51:42
oh, there's AI being forced on us. I
51:44
mean the impact on the educational sector
51:46
especially with public schools, it's really terrifying
51:48
especially because The school districts and schools
51:51
that are being forced to use this
51:53
technology, of course, are never the private
51:55
wealthy schools. It is the most resource-starved
51:57
public schools that are going to have
52:00
budgets for teachers increasingly cut. Meanwhile, they
52:02
do another AI contract and outsource, like,
52:04
lessons, the sort of things that these
52:06
companies, the ed tech AI things, pitch
52:08
as their use cases: lesson planning,
52:11
writing report cards. Basically, all the things
52:13
that a teacher does other than physically
52:15
being there and teaching, which in some
52:17
cases the companies do that too. They
52:19
say instead of teaching, put your kid
52:22
in front of a laptop and they
52:24
talk to a chatbot for an hour.
52:26
And that's the thing. And the school
52:28
could of course, I don't know, spend
52:30
money on something that's already being spent,
52:33
which is teachers have to buy their
52:35
own fucking supplies all the time. Teachers
52:37
have to just spend a bunch of
52:39
their money on the
52:41
school. They should ban it at universities
52:44
as well. Everything I'm hearing there is
52:46
just like real fucking bad like the
52:48
I mean the issue is from talking
52:50
to university professors. It's like impossible for
52:53
universities to ban it. Can you elaborate?
52:55
I haven't talked to any. I guess.
52:57
Professors are, the obvious example is like
52:59
essays, like professors get AI written essays
53:01
most of the time and they can't figure
53:04
out whether they are AI written or
53:06
not. They just notice that all of
53:08
your students suddenly seem to be
53:10
doing worse in class while having similar
53:12
output of written assignments. There are very
53:15
few tools for them to be able
53:17
to accurately detect this and figure out
53:19
what to do from it. Meanwhile, I
53:21
guess getting involved in trying to like
53:23
prosecute someone for doing this within the
53:26
academic system is a whole other thing.
53:28
But on the, in K through 12,
53:30
especially, it's been kind of, it's been
53:32
especially frustrating to see that some of
53:34
the biggest pushers of AI end up
53:37
being teachers themselves because they are overworked,
53:39
underpaid, have no time to do literally
53:41
anything, and they have to write God
53:43
knows how many lesson plans and IEPs
53:45
for kids with disabilities, and they can't
53:48
do it all. So like, well, why
53:50
don't I just plug this into what's
53:52
essentially a chat GPT wrapper? And that
53:54
results in worse outcomes for everyone, probably.
53:57
So I have some personal experience with
53:59
IEP. I don't think they're doing it
54:01
there, but they're definitely doing it elsewhere.
54:03
That's one of the things that these
54:05
tools often pitch themselves as you can
54:08
create IEPs. I want to put my
54:10
hands around someone. I want to put
54:12
my hands around someone's fucking. Can you
54:14
describe what an IEP is? I forget
54:16
what it stands for. I think it's
54:19
individual education plan. I don't, I might
54:21
be wrong, but that's the gist. It
54:23
is generally the plan that's put for
54:25
a child with special needs, so autism
54:27
being one. It names exactly what
54:30
it is that they
54:32
have to do, like what the teacher's
54:34
goals will be, like social goals. They
54:36
legally have to do all the things
54:38
in that document. And
54:41
it changes based on the designation they
54:43
get and so like it's different if
54:45
you get there's like an emotional instability
54:47
one I believe and nevertheless there's like
54:50
separate ones and each one is like
54:52
the goals of where the kid
54:54
is right now where the kid will
54:56
be in the future and so on
54:58
and so forth the idea that someone
55:01
used ChatGPT for that, at least... I
55:03
Wow, how disgraceful as well because it's
55:05
all this weird resource allocation done by
55:07
people and I feel like the overarching
55:09
problem as well as it's the people
55:12
making these decisions, putting this stuff in,
55:14
don't do work. It's school administrators that
55:16
don't teach, it's CEOs that don't build
55:18
anything, it's venture capitalists that haven't
55:20
interacted with the economy or anyone
55:23
without a Patagonia sweater in decades. And
55:25
it's these, and again these VCs, they're
55:27
investing money based on how they used
55:29
to make money, which was they invest
55:31
in literally anything and then they sold
55:34
it to literally anyone. And that hasn't
55:36
worked for 10 years. Allison, you mentioned
55:38
the thing 2015-ish. That was when things
55:40
stopped being fun. That was actually the
55:42
last time we really saw anything cool.
55:45
It was around the Apple Watch
55:47
era. Really the end of the hype
55:49
cycles, the real, the successful ones at
55:51
least. They haven't had one since then.
55:54
VR, AR, XR, um, crypto, metaverse,
55:56
the Indiegogo and Kickstarter era, sharing economy.
55:58
Yeah, but these all had the same
56:00
problem which was they cost more money
56:02
than they made and they weren't scalable
56:05
and this is the same problem we've
56:07
had. What we may be facing is
56:09
the fact that the tech industry does
56:11
not know how to make companies anymore.
56:13
Like that may actually be the problem.
56:16
Can I add one thing to what
56:18
you said about people who don't work?
56:20
I think there are people in Silicon
56:22
Valley and I don't, I'm going to
56:24
get a million emails about this, but
56:27
there are a lot of Silicon Valley
56:29
men who are white men who don't
56:31
really socialize. Yep. And I think they
56:33
are kind of propagating this technology that
56:35
allows... others to kind of not interact.
56:38
Like so much of ChatGPT is designed
56:40
to like subvert human interactions. Like you're
56:42
not going to go ask your teacher
56:44
or ask a classmate, hey, how do
56:46
we figure this out? You're just going
56:49
to go to the computer? And I
56:51
think that culturally, like I, you know,
56:53
people who grew up with computers, God
56:55
bless. But, you know, we need to.
56:58
also value social interaction and it's interesting
57:00
that there are these, there's this very
57:02
like small group of people often who
57:04
lack social skills, propagating a technology to
57:06
make other people not have social skills.
57:09
I think there's also a class aspect
57:11
to that because... Totally. Particularly with like
57:13
food on the table, but one thing
57:15
I grew up with was I don't trust
57:17
any easy fixes Nothing is ever that
57:20
easy is something that I kind of
57:22
a... if something seems too good to
56:24
be true, too accessible, there's usually something
57:26
you're missing about the incentives or the
57:28
actual output So no, I wouldn't trust
57:31
a computer to tell me how to
57:33
fix something because I don't fucking like
57:35
you made that up I like it
57:37
isn't this easy. There's got to be
57:39
a problem. Hallucinations. Hi,
57:54
I'm Morgan Sung, host of Close All
57:56
Tabs from KQED, where every week we
57:58
reveal how the online world collides with everyday
58:00
life. You don't know what's true or
58:02
not because you don't know if AI
58:05
was involved in it. So my first
58:07
reaction was, ha ha, this is so
58:09
funny and my next reaction was, wait
58:11
a minute, I'm a journalist, is this
58:13
real? And I think we will see
58:15
a streamer president, maybe within
58:17
our lifetimes. You can find Close All
58:19
tabs wherever you listen to podcasts. So,
58:21
we didn't really lead into that ad
58:23
break, but you're going to just have
58:25
to like it. I'm sure all of
58:27
you are going to send me little
58:29
emails, little emails about the ads that
58:31
you love. Well, I've got to pay
58:33
for my diet Coke somehow. So, back
58:35
to this larger point around ChatGPT and
58:37
why people use it and how people
58:39
use it. I think another thing that
58:41
just occurred to me is... Have you
58:43
ever noticed that Sam Altman can't tell
58:46
you how it works and what it
58:48
does? Have you noticed? None of these
58:50
people will tell you what it does.
58:52
I've read everything from Sam at this point,
58:54
listened to hours of podcasts, he's quite
58:56
a boring twerp. But on top of
58:58
that, for all his yapping and
59:00
yammering, him and Dario Amodei don't seem
59:02
to be able to say out loud
59:04
what the fucking thing does. And that's
59:06
because I don't think that they use
59:08
it either. Like I genuinely, I'm beginning
59:10
to wonder if any of the people
59:12
injecting AI use it. Sure, Sam Altman and Dario
59:14
probably use it. I'm not saying it
59:16
fully, but like, these aren't people, how,
59:18
the next person that meets Sam Altman
59:20
should just be like, hey, how often
59:22
do you use chat cheapity? Gets back
59:24
to that it reminds me of the
59:26
remote work thing, all these CEOs saying
59:29
guys should come back to the office
59:31
How often are you in the office? Exactly,
59:33
and I think that this is just
59:35
the giant revelation of like how many
59:37
people don't actually interact with their businesses
59:39
that don't interact with other people that
59:41
don't really know how anything works But
59:43
they are the ones making the money
59:45
in power decisions. It's fucking crazy to
59:47
me and I don't know how this
59:49
shakes out. It's not going to be
59:51
an autonomous agent doing whatever. Also, okay,
59:53
that just occurred to me as well
59:55
How the fuck do these people not
59:57
think these agents come for them first?
59:59
If the AGI was doing this and
1:00:01
they read... this and be like these
1:00:03
people fucking they worked it all out
1:00:05
I need to kill them first Well,
1:00:07
I mean, that kind of gets back
1:00:10
to what you're saying, where it's like,
1:00:12
you know, if we entertain the fan
1:00:14
fiction for a little bit, what is
1:00:16
the frame of mind for these agents
1:00:18
if they're autonomous or not? How are
1:00:20
we thinking of them? Are we thinking
1:00:22
about like, if they're persons or if
1:00:24
they're, you know, lobotomized in some
1:00:26
way? Do they have opinions? You know,
1:00:28
and I think really just gets back
1:00:30
to like, you know, part of the
1:00:32
old hunt for like, you know, you
1:00:34
know, a nice, polite slave, you
1:00:36
know? How do we figure out how
1:00:38
to reify that relationship? Because it was
1:00:40
quite profitable at the turn of like
1:00:42
industrial capitalism and you know, I think,
1:00:44
you know, it's not a coincidence that
1:00:46
a good chunk of our tech visions
1:00:48
come to us from reactionaries who think
1:00:50
that the problem with capitalism, the problem
1:00:53
with tech development is that a lot
1:00:55
of these empathetic egalitarian reforms get in
1:00:57
the way of profit making. You know,
1:00:59
I think similarly, you know, the hunt
1:01:01
for automatons for certain algorithmic systems is
1:01:03
searching for a way to figure out
1:01:05
how do we replicate, you know, human
1:01:07
labor without the limitations on extracting and
1:01:09
pushing and coercing as much as possible
1:01:11
with, you know, your agent or something
1:01:13
else. And the thing is, yeah, sure,
1:01:15
the idea of an autonomous AI system
1:01:17
would be really useful. I'm sure it
1:01:19
could do stuff. That sounds great. There are
1:01:21
massive, as you've mentioned, like...
1:01:23
sociological problems, like, do these things feel
1:01:25
pain? If so, how do I create?
1:01:27
Anyway, but in all seriousness, like, sure,
1:01:29
an autonomous thing that could do all
1:01:31
this thing would be useful. They don't
1:01:34
seem to even speak to that. It's
1:01:36
just like, and then the AI will
1:01:38
make good decisions. And then the decisions
1:01:40
will be even better, then Agent 7
1:01:42
comes out, and you thought Agent 6
1:01:44
was good. It's like they don't even
1:01:46
speak to how we're going to get
1:01:48
to the point where Agent 1 knows
1:01:50
truth from falsehood. Of course,
1:01:52
yeah, you know, we just need to
1:01:54
give it all of our data and
1:01:56
everything that we've paid money for required
1:01:58
other people to pay money for. and
1:02:00
then it will finally be perfect. And
1:02:02
it doesn't even make profit of
1:02:05
any kind. That's the other thing.
1:02:07
It's like people saying, well, it
1:02:09
makes profit. It is the profit-seeking.
1:02:12
Is it profit-seeking? It doesn't seem
1:02:14
like we've sought much profit
1:02:16
or any. That's also, I
1:02:18
think, a good point of
1:02:20
comparison to Uber. These companies
1:02:22
that achieved massive scale and
1:02:25
popularity, by making their products
1:02:27
purposefully unprofitable by... charging you $5
1:02:29
for a 30-minute Uber across town so
1:02:31
that you're like, yeah, this is going
1:02:33
to be part of my daily routine.
1:02:35
And the only way they've been able
1:02:37
to squeeze out a little bit of
1:02:39
profit right now is by hiking those
1:02:41
prices up, but trying to balance it
1:02:43
to where they don't hike it up
1:02:45
so much that people don't use it
1:02:47
anymore. And AI is at the point
1:02:49
where for these agents, I think some
1:02:52
of the costs are something like
1:02:54
thousands of dollars a month. And they
1:02:56
don't work already. You're still not making
1:02:58
money by charging people that much money to
1:03:00
use it. What is the use case one
1:03:02
where this even works? And if it somehow
1:03:04
did manage to work, how much is that
1:03:06
going to cost? Who is going to be
1:03:08
paying $20,000 a month for one of these
1:03:10
things? And how much of that is dependent
1:03:12
on what is clearly nakedly subsidized compute
1:03:15
prices? How much of this is because
1:03:17
Microsoft's not making a profit on
1:03:19
Azure compute? OpenAI isn't, Anthropic isn't.
1:03:21
What happens if they need to?
1:03:23
What if they need to?
1:03:26
They're going to, that's the subprime AI
1:03:28
crisis from last year. It's just, it's,
1:03:30
it's- Well, that's when you get the
1:03:32
venture capitalist insisting that that's why we need
1:03:34
to, you know, do this AI CapEx rollout,
1:03:37
because if we build it out, like infrastructure,
1:03:39
then we can actually lower the compute
1:03:41
prices and not subsidize anymore? Yeah, that's a
1:03:43
thing. But that's the other thing. So, the
1:03:46
information reported that OpenAI says
1:03:48
that by 2030, they'll be
1:03:50
profitable. How? And you may
1:03:52
think, what does that mean? And the answer is,
1:03:54
Stargate has data centers. Now, you have to, I
1:03:56
just have one little question. This isn't a knock
1:03:59
on the information, this is... they're reporting what
1:04:01
they've been told, which is fine. A
1:04:03
little question for OpenAI though,
1:04:05
how does more equal less cost? Because
1:04:08
this thing doesn't scale, they lose money
1:04:10
on every prompt, it doesn't feel like
1:04:12
they'll make any money, in fact they
1:04:15
won't make any money, they'll just have
1:04:17
more of it. And also there's the
1:04:19
other thing of... Data centers are not
1:04:22
fucking weeds, they don't grow in six
1:04:24
weeks, they take three to six years
1:04:26
to be fully done. If Stargate is
1:04:29
done by next year, I will fucking
1:04:31
barbecue up my parjorie's hat and eat
1:04:33
it live on stream. Like, I will,
1:04:35
that's if they're fucking alive next year.
1:04:38
Also, the other thing is, getting back
1:04:40
to 2027 as well, year 2026-2027 is
1:04:42
gonna be real important for everything. 2027
1:04:45
or 2026 is when Dario Amodei says
1:04:47
that Anthropic will
1:04:49
be profitable. That's also when the Stargate
1:04:52
Data Center project will be done in
1:04:54
2026. I think that they may have
1:04:56
all just chosen the same year because
1:04:59
it sounded good and they're going to
1:05:01
get in real trouble next year when
1:05:03
it arrives and they're nowhere near close.
1:05:06
I can't wait until all of those
1:05:08
companies announce that because of the tariffs,
1:05:10
that they have to delay their timeline
1:05:13
and it's like completely out of their
1:05:15
hands, but no, the tariffs, you understand
1:05:17
the tariffs. I got a full roasted
1:05:20
pig, I'm going to be tailgating. Microsoft
1:05:22
earnings, April 23rd, cannot wait. Yeah, you
1:05:24
should go to like a data center.
1:05:27
No, no, no, no. You have a
1:05:29
marching band, like at the end of Severance.
1:05:31
Yeah, it's, but that's the thing. Like,
1:05:33
I actually agree. I think that they're
1:05:36
going to, there's going to be difficult
1:05:38
choices. Sadly, there's only really two. One,
1:05:40
CapEx reduction, two, layoffs. Like why are
1:05:43
we doing this? It just, it feels
1:05:45
like the collapse of any good or
1:05:47
bad romantic relationship, where just one party
1:05:50
is doing shit that they think works
1:05:52
from years ago and the other party
1:05:54
is just deeply unhappy and then disappears
1:05:57
one day, and the other party's like, what
1:05:59
just happened? I watched an episode of Lost
1:06:01
last night and it just happened. This
1:06:04
is, uh... Lost is a far more
1:06:06
logical show than any of this AI
1:06:08
bullshit, but it's... Let's not get that
1:06:11
crazy. No, no, it's a bad show.
1:06:13
It's about... No, I wouldn't say that
1:06:15
either. Talking about something that's very long,
1:06:18
very expensive and never had a plan,
1:06:20
but everyone talks about it like it was
1:06:22
good, despite never proving it. Lost.
1:06:25
Yeah, sorry, I really
1:06:27
do have some feelings on that.
1:06:29
You're gonna get some
1:06:31
emails, I'm sure. Email for
1:06:34
me. Yeah. He's texting me, he is
1:06:36
writing, and he is just sending me
1:06:38
the very quiet crying emoji
1:06:41
like a hundred times. Yeah, it's I
1:06:43
think it's just I can't wait to
1:06:45
see how people react to this stuff
1:06:48
as well when this because I obviously
1:06:50
will look very silly if these companies
1:06:52
stay alive and somehow make AGI.
1:06:55
Like the gravedigger AI truck is going
1:06:57
to run me over outside my house,
1:06:59
it's going to be great. But I
1:07:02
can't wait to see how people explain
1:07:04
this. I can't wait to see what
1:07:06
the excuse is, like, are we never,
1:07:09
the tariffs maybe? Right, and I talked
1:07:11
to an analyst just last week who's
1:07:13
like a bullish AI tech investor and
1:07:16
he said, he said already you're seeing
1:07:18
investment pull back because of expectations in
1:07:20
the market that there was... these stocks
1:07:23
were overbought in the first place and
1:07:25
now there's all this other turmoil external
1:07:27
macro elements that are going to kind
1:07:29
of take the you know the jargon
1:07:32
of like the froth out of the
1:07:34
market they're gonna it's all gonna deflate
1:07:36
a little bit and so I was
1:07:39
asking him like is the AI bubble
1:07:41
popping and he says no but tariffs
1:07:43
are definitely like deflating it and
1:07:46
delaying whatever progress that we are going
1:07:48
to be promised from these companies is
1:07:50
going to be delayed. Even if it
1:07:53
was going to be delayed they were
1:07:55
going to find other reasons this is
1:07:57
a convenient macro kind of excuse to
1:08:00
just say like oh well we need
1:08:02
we didn't have enough chips we didn't
1:08:04
have enough investing we didn't have enough
1:08:07
you know, be patient with us. We're
1:08:09
going to have the revolution is coming.
1:08:11
What's great is well, talking of my
1:08:14
favorite Wall Street analyst, Jim Cramer of
1:08:16
CNBC. So CoreWeave's IPO went out. I
1:08:18
just need to mention we are definitely
1:08:21
in the hype cycle because Jim Kramer
1:08:23
said that he would sue an analyst,
1:08:25
DA Davidson, on behalf of Nvidia, for
1:08:27
claiming that they were a lazy Susan,
1:08:30
as in... As in, basically what the
1:08:32
argument is, is that Nvidia funded CoreWeave, so
1:08:37
CoreWeave would buy GPUs, and at
1:08:39
that point, CoreWeave would then take out
1:08:41
loans on those GPUs for CapEx reasons,
1:08:44
CapEx including buying GPUs. So very clearly...
1:08:46
And also, you attack Gil over at
1:08:48
DA Davidson, you and me, Cramer,
1:08:48
in the ring. But we know we're
1:08:51
in the crazy time when you've got
1:08:53
like a TV show host acting like
1:08:55
that. I think that we're going
1:09:00
to see a historic washout of people
1:09:02
and the way to change things is
1:09:05
this time we need to make fun
1:09:07
of them. I think we need to
1:09:09
be like actively, we don't need to
1:09:12
be mean, that's my job, but we
1:09:14
can be like, to your point, your
1:09:16
article, Allison, it's like, say, hey
1:09:19
look, no, what you are saying is
1:09:21
not even rational or even connected to
1:09:23
reality. This is not doing the right
1:09:25
things. Apple Intelligence is like the greatest
1:09:28
anti-AI radicalization ever. I actually think Tim...
1:09:30
It's so bad. It's so fucking bad.
1:09:32
And I, before it even came out,
1:09:35
I like downloaded the beta, I was
1:09:37
like, I'm going to test this out
1:09:39
because, you know, I talk about this
1:09:42
thing on my podcast sometimes, and it's
1:09:44
so bad. It's so bad. I like
1:09:46
turned it off for most things, but
1:09:49
I have it on for a couple
1:09:51
of social networks. And I mean, I
1:09:53
guess with the most recent
1:09:56
update. I checked, that person
1:09:58
didn't even like the ski. I don't
1:10:00
know where that name came from and
1:10:03
this happens like every other day. It's
1:10:05
just completely wrong. I'm like how? My
1:10:07
favorite is the summary text for Uber,
1:10:10
where it's like several cars headed to
1:10:12
your location. It's great as well because
1:10:14
I usually don't buy into the Steve
1:10:17
Jobs would burst from his grave thing.
1:10:19
I actually think numerous choices Tim Cook
1:10:21
has made have been way smarter than
1:10:23
how Jobs would have run it. This is actually
1:10:26
like he's going to burst out of
1:10:28
the ground, Thriller style. Actually, was
1:10:30
that zombies popping out? Anyway, because it's
1:10:33
nakedly bad, close, it's not a great
1:10:35
reference, but it's nakedly bad, like it
1:10:37
sucks, and I've... people in
1:10:40
my life who are non-techie constantly are like,
1:10:42
hey, what is Apple Intelligence? Am I
1:10:44
missing something? I'm like, no, it's actually
1:10:47
as bad as you think. Like, with a word,
1:10:49
I'm trying to, sometimes I might want to
1:10:51
use find definition or any of the
1:10:54
things that come up I have to
1:10:56
scroll by like seven different new options
1:10:58
under the like right click or double
1:11:01
click thing. And if you hit writing
1:11:03
tools it opens up a screen-wide
1:11:05
thing. Yes, it opens up a thing
1:11:08
and I'm like who has ever who
1:11:10
is trying to use this to rewrite
1:11:12
a text to their group chat? Who
1:11:15
is this for? I feel like Apple
1:11:17
to its credit is recognizing its mistake
1:11:19
and it's clawing it back and like
1:11:21
delaying Siri indefinitely. I mean... I don't
1:11:24
know if I agree on that one.
1:11:26
That's fair. Because the thing they're delaying
1:11:28
is the thing that everyone wanted. I
1:11:31
think they can't make it work because
1:11:33
the thing they're delaying never existed.
1:11:35
Yeah. We'll see. Apple's washed. I mean,
1:11:38
but that's the thing. It's the most
1:11:40
brand conscious company on the planet. And
1:11:42
I wrote like when they did their
1:11:45
June revelation of the Siri AI that was
1:11:47
going to come out and they said
1:11:49
it was going to come out in
1:11:52
the fall and then it was coming
1:11:54
out in the spring and now it's
1:11:56
not coming out ever, question mark. In
1:11:59
the whole, like, two-hour presentation, the letters
1:12:01
AI were never spoken, artificial was never
1:12:03
spoken, it was Apple Intelligence. We're doing
1:12:06
this, we're doing our own thing, it's
1:12:08
not, you know, because they already understood
1:12:10
that when you say something is like,
1:12:13
that looks like it was generated by
1:12:15
AI, you're saying it looks like shit,
1:12:17
you know. And the suggestions are also
1:12:19
really bad too. I've had like over
1:12:22
the last few weeks a few people
1:12:24
give me some bad news from their
1:12:26
lives and the responses it gives are
1:12:29
really funny. Oh no. It'd be like
1:12:31
someone telling me something bad happened and
1:12:33
it's like, oh, or like I'm like,
1:12:36
what was the worst one I had?
1:12:38
It was like, that sounds difficult and
1:12:40
it's like, like, any juice to it,
1:12:43
like. I didn't read too long. Those
1:12:45
would be funny suggestions. But it's proof
1:12:47
that I think that these large language
1:12:50
models don't actually, well they don't understand
1:12:52
anything, they don't know, I think they're
1:12:54
not conscious. But it's like, they're really
1:12:57
bad at understanding words. Like people like,
1:12:59
oh, they make some mistakes. They're bad
1:13:01
at basic contextual stuff. And we had
1:13:04
Victoria Song from The Verge on the
1:13:06
other day, and she was talking about
1:13:08
high context and low context languages.
1:13:11
I can only speak English. I imagine,
1:13:13
not being able to read or speak
1:13:15
in any others, that it really fumbles
1:13:17
those. And if you're listening, you want
1:13:20
to email me anything about this research,
1:13:22
how the fuck does this even scale
1:13:24
if it can't, oh we're replacing translators,
1:13:27
great, you're replacing translators with things that
1:13:29
sometimes translate, right? Sometimes? It just feels,
1:13:31
it also inherently, like that feels like
1:13:34
an actual alignment problem, by the way,
1:13:36
that right there that feels like an
1:13:38
actual safety problem okay if we're relying
1:13:41
on something to translate and it translates
1:13:43
words wrong and you know especially in
1:13:45
other languages subtle differences can change everything
1:13:48
maybe that's dangerous no no no we
1:13:50
got the computer wake up in like
1:13:52
two weeks and then it's gonna be
1:13:55
And that's the other thing. We're going
1:13:57
to make AGI and we think it's
1:13:59
not going to be pissed off at
1:14:02
us. I don't mean Rokoko's modern basilisk
1:14:04
or whatever. I mean just like if
1:14:06
it wakes up and looks at the
1:14:09
world and goes, these fucking morons, like
1:14:11
you need to watch Person of Interest
1:14:13
if you haven't, one of the best
1:14:16
shows actually on AGI, like genuinely
1:14:18
you need to watch Person of Interest
1:14:20
because you will see how that could
1:14:22
happen when you allow a quote perfect
1:14:25
computer to be particularly good at...
1:14:27
decision-making? I don't know. I feel like
1:14:29
so much of this revolution, quote-unquote, is
1:14:32
based on just the assumption that the
1:14:34
computer makes great decisions and it oftentimes
1:14:36
doesn't. It often does not. Why would
1:14:39
I think that the same search function
1:14:41
in Apple that cannot find a document
1:14:43
that I know what the name is
1:14:46
and I'm searching for it? Why would
1:14:48
I think that that same computer is
1:14:50
going to be able to make wise
1:14:53
decisions about my life finance and personal
1:14:55
relationships? Because that's Apple and this is
1:14:57
AI. Oh, that's, that's, I'll show myself
1:15:00
out. I don't know how much is
1:15:02
AI versus just like a good translation
1:15:04
app. Like I genuinely don't know like
1:15:07
well it's because AI is such a
1:15:09
squishy term that we really don't like
1:15:11
right in some way like I guess
1:15:14
AI could be expanded to include a
1:15:16
lot of modern computing like I can
1:15:18
see travel in like emergency situations where
1:15:20
you need where like a good AI
1:15:23
translator would be like a real lifesaver
1:15:25
just as a small aside I was
1:15:27
just in Mexico and my stepkids
1:15:30
were using Google Translate and I was
1:15:32
like kind of remembering Spanish and you
1:15:34
know blah blah blah. Go into a
1:15:37
coffee shop and I wanted to order
1:15:39
a flat white and so I used
1:15:41
Google Translate to say like how would
1:15:44
you order a flat white in Spanish
1:15:46
and it said to order a Blanco
1:15:48
Plano which means flat white but like
1:15:51
across Mexico City there are wonderful coffee
1:15:53
shops and you know what they call
1:15:55
them? Flat whites. An Australian coffee or
1:15:58
something? I learned that very quickly with
1:16:00
the help of Reddit, because I went
1:16:02
to the barista and ordered a Blanca
1:16:05
Plano and they were like, who are
1:16:07
you? You crazy gringo, we don't have that here.
1:16:09
Sorry, I speak English. Yeah. Yeah. I
1:16:12
mean, the functionality is very limited on
1:16:14
those things. And it's just like, also,
1:16:16
it gets back to like, if it's
1:16:18
100% reliable, it's great. If it's 98%
1:16:21
reliable, it sucks. And, um, just as
1:16:23
an aside, did any of you hear
1:16:25
about Jony, like, the latest, like, quasi-fraudulent
1:16:28
thing with Jony Ive that's happening? I
1:16:30
just saw the headline. So, Sam Altman
1:16:32
and Jony Ive founded a hardware startup
1:16:35
last year. No, actually, I do
1:16:37
know. That has built nothing. There is
1:16:39
a thing they claim there's a phone
1:16:42
without a screen. Sick. And Open AI,
1:16:44
a company run by Sam Altman, financed
1:16:46
by Microsoft, is going
1:16:49
to buy for half a billion dollars
1:16:51
this company that has built nothing, co-founded
1:16:53
by Sam Altman. Sick. I feel like
1:16:56
there should be a law against
1:16:58
this, but it's just like, what have
1:17:00
they been doing? And this is just,
1:17:03
it's kind of cliched to say like,
1:17:05
quote The Big Short, but like a
1:17:07
big part of the beginning of that
1:17:10
movie is talking about the increase in
1:17:12
fraud and scams. And it
1:17:14
really feels like we're getting there. Rest
1:17:16
in piss, you won't be missed. Motherfuckers,
1:17:19
two management consultants, both lacking dignity,
1:18:21
eat shit. Jesse Lyu, Jesse Lyu, Rabbit
1:18:23
R1, you're next, motherfucker. When your
1:17:26
shit's gone, I'll be honking and laughing.
1:17:28
Your customers should sue you. The description,
1:18:30
so my colleagues at The Information reported this
1:18:33
Jony Ive, Sam Altman news, and
1:17:35
the description for the device really makes
1:17:37
me chuckle. Designs for the AI device
1:17:40
are still early and haven't been finalized,
1:17:42
the people said. Potential designs include a
1:17:44
quote-unquote phone without a screen, and AI-enabled
1:17:47
household devices. Others close to the project
1:17:49
are adamant that it is, quote, not
1:17:51
a phone, end quote. And they're going
1:17:54
to spend, they've discussed spending upwards of
1:17:56
500 million on this company. This is
1:17:58
like a bad philosophy class where it's
1:18:01
like, what is a phone that's not
1:18:03
a phone? Lipot and phone. Semiotics for
1:18:05
beginners. Jesus, fucking Christ. Oh my God.
1:18:08
And that's, like, that's the thing as
1:18:10
well, like, this feels like a thing
1:18:12
the tech media needs to be on.
1:18:14
Just someone needs to say, I'll be
1:18:17
saying it. Boardering on fraud, like it
1:18:19
seems like it must be legal because
1:18:21
otherwise there would be some sort of
1:18:24
authority, right? You can't do anything illegal
1:18:26
without anything happening. Hmm. But it's like,
1:18:28
this is one of the most egregious
1:18:31
fucking things I've ever seen. This is
1:18:33
a guy handing himself money. One hand,
1:18:35
this is, should be fraud. Like this,
1:18:38
how is this ethical? And everyone's just
1:18:40
like, oh yeah, you know. This, Kevin
1:18:42
Roose. Maybe you should get on this
1:18:45
shit, find out what the phone that
1:18:47
isn't the phone is. What the fuck?
1:18:49
And also household appliances with AI. Maybe
1:18:52
like something with the screen and the
1:18:54
speaker that you could say, like a
1:18:56
word to it and it would wake
1:18:59
up and play music. Like a Roomba.
1:19:01
A Roomba? A Roomba with AI might've
1:19:03
just declared bankruptcy. Did they? Roomba dead?
1:19:06
Roomba dead? Why Roomba dead? I think
1:19:08
they did, I don't know actually, I
1:19:10
remember, I read the headline in the
1:19:12
last few weeks. They were supposed to
1:19:15
be acquired by Amazon, but I think
1:19:17
the deal fell through under Lina Khan's
1:19:19
FTC, I'd assume? Sick. Got them. Hmm.
1:19:22
Also, one quick note on the
1:19:24
Jony Ive, Sam Altman thing. I
1:19:26
guess it's notable that Altman has been
1:19:29
working closely with the product but is
1:19:31
not a co-founder and whether he has
1:19:33
an economic stake in the hardware project
1:19:36
is unclear. Yeah, you know, he just
1:19:38
seems to be working closely with it.
1:19:40
He just ends up with a room.
1:19:43
He's just hanging out there on taking
1:19:45
a salary and an equity position. I
1:19:47
do think it's very interesting. All of
1:19:50
these different AI device startups that have
1:19:52
popped up in the last couple of
1:19:54
years. The question for them is always just
1:19:57
like To what end? Yeah. People didn't
1:19:59
like Amazon Alexa. And it also lost
1:20:01
a shit ton of money. Yeah. And
1:20:04
Amazon's still trying to make it work.
1:20:06
Siri's never been super popular. And I
1:20:08
just don't, like, one of my co-hosts
1:20:10
on the podcast Intelligent Machines is
1:20:13
obsessed with all these devices, just because
1:20:15
he's like one of those tech guys.
1:20:17
Leo, yes. He's, and we love to
1:20:20
make fun of it. His latest
1:20:22
device is this thing called a Bee.
1:20:24
We just had Victoria Song on talking
1:20:27
about this, the thing that records everything
1:20:29
all the time and then puts
1:20:31
that up in the cloud and then
1:20:34
I guess doesn't store the full transcripts
1:20:36
But does store a little AI-generated description
1:20:38
of everything you did and whoever you
1:20:41
talked to that day and there's no
1:20:43
way, I mean, he's in California, which
1:20:45
is not a one-party recording state. Yeah, everybody
1:20:48
has to consent to record, and the Bee is not
1:20:50
doing that but it's just baffling to
1:20:52
me because I'm just like I guess
1:20:55
he's like well it could be nice
1:20:57
to have record of all of my
1:20:59
days all the time and I'm like
1:21:02
I guess but to what end just
1:21:04
record it, you know, write it down,
1:21:08
yeah, write it down. There's literally a
1:21:08
Black Mirror episode about that, I believe
1:21:11
it's the first episode of Black Mirror.
1:21:13
everyone has like a recording device and
1:21:15
then it does... When
1:21:18
you were talking about this on the
1:21:20
show I was listening thinking like in
1:21:22
this black mirror thing it reminded me
1:21:25
that like when Facebook started having like
1:21:27
all your photos collected under your photos
1:21:29
and like how we started reliving so
1:21:32
many experiences online, like you could
1:21:34
scroll back and like look at how
1:21:36
happy you were like six years ago
1:21:39
you know like and it creates this
1:21:41
like cycle like imagine if every interaction
1:21:43
every, like, romantic interaction, every sad interaction,
1:21:48
everything, you could replay back to yourself.
1:21:50
It sounds like a nightmare to me.
1:21:50
I do think it's also just a
1:21:53
nightmare. Like, humans, we're not built socially
1:21:55
to exist in a world where every
1:21:57
interaction is recorded and searchable with everyone
1:22:00
forever. Like you would never have a
1:22:02
single friend. Romantic relationships would dissolve. The
1:22:04
Eternal Sunshine of the Spotless Mind, like,
1:22:06
maybe, frame, but even then, like, memory
1:22:09
is vastly different to the experience of
1:22:11
collecting it. Like just existing, like we
1:22:13
are brain slot, I don't know, my
1:22:16
brain just goes everywhere. But like compared
1:22:18
to memory which can be oddly crystalline
1:22:20
and wrong, you can just remember something,
1:22:23
you can remember a subtle detail wrong,
1:22:25
or you can just fill in the
1:22:27
gaps, memory sucks. Also, doesn't having, like,
1:22:32
a device that constantly records everything
1:22:34
erode the impulse or maybe the
1:22:37
drive to be as present? You know,
1:22:37
because you're like, well, I'm going to
1:22:39
refer to it. But this has also
1:22:41
got huge privacy implications where suddenly the
1:22:44
cops could just be like, yeah, we're
1:22:46
just going to take a recording, we're
1:22:48
just going to subpoena everybody who was
1:22:51
in this area's Bee device, and then
1:22:53
suddenly get a recording of everyone's day
1:22:55
who just happened to be
1:22:58
in this place because we think a
1:23:00
crime could have happened there. But I
1:23:02
think that there's an overarching thing to
1:23:04
everything we're talking about which is these
1:23:07
are products made by people that haven't
1:23:09
made anything useful in a while and
1:23:11
everything is being funded based on what
1:23:14
used to work. What used to work
1:23:16
was you make a thing, people buy
1:23:18
it and then you sell it to
1:23:21
someone else to take it public. This
1:23:23
only worked until about 2015. It's not
1:23:25
just a zero interest rate thing. We
1:23:28
have increasingly taken away the creation of
1:23:30
anything valuable in tech from people who
1:23:32
experience real life. Like our biggest CEOs
1:23:35
are Sam Altman, Dario Amodei, Sundar Pichai,
1:23:37
MBA, former McKinsey, Satya Nadella, MBA.
1:23:39
I mean, Tim Cook, MBA. Like these
1:23:42
are all people that don't really interact
1:23:44
with people anymore and the problems, the
1:23:46
people in power are not engineers, they're
1:23:49
not... Even startup founders anymore, they're fucking
1:23:51
business people. Making things that they think
1:23:53
they could sell, things that could grow
1:23:56
the right economy, of course. And we're
1:23:58
at the kind of the pornographic point
1:24:00
where it's like a guy being like
1:24:02
what could AI do, what
1:24:05
does AI do? You can just
1:24:07
throw a bunch of data at it and it gives
1:24:09
you insights. Well, what if we just
1:24:12
collected data on everything happening around us
1:24:14
ever that would be good then you
1:24:16
could reflect on things that's what people
1:24:19
do right and I actually genuinely think
1:24:21
there's really only one question
1:24:26
to ask on that, so: are
1:24:28
you wearing one of these now, and
1:24:30
how often do you use this? Because,
1:24:33
like, if they use it, I'll
1:24:35
kind of respect them. I guarantee they
1:24:35
don't. I guarantee they don't, and they'll
1:24:37
probably say something about being around a lot
1:24:40
of privileged information, as opposed to everyone
1:24:42
else who's not important. And there's fucking
1:24:44
Jony, oh, it's going to be a
1:24:47
phone without a screen. What can you
1:24:49
do with it? I don't know. I
1:24:51
haven't thought that far ahead. I only
1:24:54
get paid $15 million a year to
1:24:56
do this. Question is also, who wants
1:24:58
a phone without a screen? The screen's
1:25:00
the best part. I love the screen.
1:25:03
I love to hate the screen. It's like,
1:25:12
they have friends who are all like
1:25:14
have fifty million dollars in the bank
1:25:17
account at any time they just like
1:25:19
exist in a different difficulty level they're
1:25:21
all playing on very easy, they don't
1:25:24
really have like predators of any kind
1:25:26
they don't really have experiences, so
1:25:28
what they experience in life is when
1:25:31
you have to work out what you
1:25:33
enjoy and because they enjoy nothing all
1:25:35
they can do is come up with
1:25:38
ideas. That's why the Rabbit R1:
1:25:40
oh, what do people do? Uh, uh...
1:25:42
order McDonald's. Can it do it? Can it?
1:25:45
Not really. But it also could
1:25:47
take a photo of something, it could
1:25:49
be pixelated. That was about it. You could
1:25:52
kind of order an Uber through it,
1:25:54
maybe. What was great was the rabbit
1:25:56
launch, the rabbit launch, and he tried
1:25:58
to order McDonald's live, and it just
1:26:01
didn't work. Took like five minutes to
1:26:03
fail. And that's the thing, like, I
1:26:05
feel like when this hype cycle ends,
1:26:08
the tech media needs to just be
1:26:10
aggressively like, we can ask the questions
1:26:12
I was asking in 2021 where it's
1:26:15
like, what does this do? Who is
1:26:17
it for? And if anyone says it
1:26:22
could address millions of people. Like, have
1:26:22
you talked to one of them, motherfucker?
1:26:24
One of them. Well, I think we
1:26:26
can wrap it up there, though. I
1:26:29
think, Allison, where can people find you?
1:26:31
You can find me at cnn.com/Nightcap. I
1:26:33
write the CNN Business Nightcap. It's in
1:26:36
your inbox four nights a week. Oh,
1:26:38
yeah. Ed? I write a newsletter on
1:26:40
Substack, Tech Bubble. I co-host the podcast This
1:26:43
Machine Kills. You're on
1:26:45
Blue Sky too, right? Yeah, Blue Sky,
1:26:47
at Edward Ongweso Junior dot com. You can read
1:26:50
my work at the information. I also...
1:26:52
host a podcast called Intelligent Machines and
1:26:54
you can find me on Twitter at
1:26:57
Paris Martineau or on Blue Sky at
1:26:59
paris.nyc. And you can find me at
1:27:01
at edzitron.com on Blue Sky. Google 'who
1:27:03
destroyed Google search,' click the first link,
1:27:06
it's me. I destroyed Google search, click
1:27:08
the first link, it's me. I destroyed
1:27:10
Google search along with Prabhakar Raghavan. Fuck
1:27:13
you dude if you want to support
1:27:15
this podcast you should go to the
1:27:17
webbies I will be putting the link
1:27:20
in there I need your help I've
1:27:22
never won an award
1:27:24
in my life We are winning right
1:27:27
now, please help me, please help me
1:27:29
win this, and if I need to
1:27:31
incentivize you further, we are beating Scott
1:27:34
Galloway. If you want to defeat Scott
1:27:36
Galloway, you need to vote on this.
1:27:38
Thank you so much for listening everyone.
1:27:41
Thank you for listening to Better Offline.
1:27:43
The editor and composer of the Better
1:27:45
Offline theme song is Matt Osowski. You can
1:27:50
check out more of his music and
1:27:52
audio projects at mattosowski.com. m-a-t-t-o-s-o-w-s-k-i dot com. You can
1:27:52
email me at ez at betteroffline.com or
1:27:57
visit betteroffline.com to find more podcast links
1:27:57
and of course my newsletter. Thank you
1:27:59
so much for listening. Better Offline is
1:28:01
a production of Cool Zone Media. For
1:28:04
more from Cool Zone Media, visit our
1:28:06
website, coolzonemedia.com, or check us out on
1:28:08
the I heart radio app, Apple Podcast,
1:28:11
or wherever you get your podcast. Military
1:28:35
veterans and active duty service
1:28:37
members. You could become a
1:28:39
certified cyber warrior with just
1:28:41
12 weeks of training at
1:28:43
my computer career. Cybersecurity specialists
1:28:46
are in high demand. And
1:28:48
IT pros with these skills
1:28:50
often enjoy abundant opportunity and
1:28:52
a great lifestyle. Protect our
1:28:54
people, liberty, and treasured institutions
1:28:56
from cyber threats. Skillbridge and
1:28:58
other VA benefits are available
1:29:00
to those who qualify. Learn
1:29:02
more at my computercareer dot
1:29:04
EDU slash CWP. In
1:29:06
a world of economic uncertainty and workplace
1:29:08
transformation, learn to lead by example from
1:29:11
visionary C-suite executives like Shannon Schuyler of
1:29:13
PwC and Will Pearson of iHeart Media.
1:29:15
The good teacher explains, the great teacher
1:29:17
inspires. Don't always leave your team to
1:29:20
do the work. That's been the most
1:29:22
important part of how to lead by
1:29:24
example. Listen to Leading by Example, executives
1:29:26
making an impact on the I heart
1:29:29
radio app, Apple Podcasts, or wherever you
1:29:31
get your podcasts. Hi, I'm Morgan Sung,
1:29:33
host of Close All Tabs from KQED,
1:29:35
where every week we reveal how the
1:29:38
online world collides with everyday life. You
1:29:40
don't know what's true or not, because
1:29:42
you don't know if AI was involved
1:29:44
in it. So my first reaction was,
1:29:47
ha ha ha, this is so funny.
1:29:49
And my next reaction was, wait a
1:29:51
minute, I'm a journalist, is this
1:29:53
real? And I think we will
1:29:56
see a streamer president, maybe
1:29:58
within our lifetimes.
1:30:00
You can find Close All
1:30:02
Tabs wherever you listen to
1:30:05
podcasts.