Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
Use Ctrl + F to search
0:00
News never stops, and neither
0:02
does business. Advanced solutions from
0:04
Comcast Business help turn today's
0:07
enterprises into engines of modern
0:09
business. It's how financial firms
0:11
turn into market tracking, client
0:14
servicing, cyber securing economic engines.
0:16
With leading connectivity and networking,
0:19
advanced cybersecurity and expert partnership,
0:21
Comcast Business is powering the
0:23
engine of modern business, powering
0:26
possibilities. Restrictions apply. This week I
0:28
checked my credit card bill, normally
0:30
a pretty boring process in my
0:32
life, and I see a number
0:34
that astonishes me in its size
0:36
and gravity. And so the first
0:38
thing I think is, how much
0:40
DoorDash is it possible to eat
0:42
in one month? Have I hit
0:44
some new level of depravity? But
0:46
then I thought. I go through
0:48
the statement, and I find a
0:50
charge that is from the heating
0:52
and plumbing company that I used
0:55
to use when I lived in
0:57
the home of Kara Swisher. Kara
0:59
Swisher, of course, the iconic technology
1:01
journalist, friend, and mentor, originator of
1:03
the very podcast feed that we're
1:05
on today. And former landlord of
1:07
Casey Newton. Former landlord of me,
1:09
and when I investigated, it turned
1:11
out that Kara Swisher had charged
1:13
my credit card for $18,000. For
1:15
what? What costs $18,000? I don't
1:17
know what is going on, but
1:19
it costs $18,000 to fix. And
1:21
until I made a few phone
1:23
calls yesterday, that was going to
1:25
be my problem. So here's what
1:27
I want to say to the
1:29
people of America. You need to
1:31
watch these landlords. You might think
1:33
that you're out from underneath their
1:35
thumb, but they will still come
1:37
for you, and they will put
1:40
$18,000 on your credit card if
1:42
you do not watch them. Well,
1:44
and I should say, basically it
1:46
was on file with the heating
1:48
and plumbing company. So I'm not
1:50
sure that I could actually blame
1:52
Kara for this, but I did
1:54
have to talk to her about
1:56
it. Oh, she's crafty. I think
1:58
she knew what she was doing. She's
2:00
been waiting to get back at
2:02
us like this for a long
2:04
time. And mission accomplished, Kara. I'm
2:06
Kevin Roose, a tech columnist at the
2:08
New York Times. I'm Casey Newton from
2:11
Platformer. And this is Hard Fork.
2:13
This week, the group chat that's
2:15
rocking the government. We'll tell you
2:17
why the government turned to Signal
2:20
for planning military operations and why
2:22
it's probably not a great idea.
2:24
Then, podcaster Dwarkesh Patel stops
2:26
by to discuss his new book.
2:29
and tell us why he still
2:31
believes AI will benefit all of
2:33
us in the end. And finally,
2:35
we asked you if AI was
2:38
making you dumber. It's time to
2:40
share what you all told
2:42
us. I feel smarter already.
2:44
Well, Casey, the big story
2:46
of the week is Signalgate.
2:48
Yes, I would say, Kevin,
2:50
the group chats are popping off
2:52
at the highest levels of government.
2:54
Yes, and if you have been
2:56
hiding under a rock or on
2:58
a silent meditation retreat or something
3:00
for the last few days, let's
3:03
just quickly catch you up on
3:05
what has been going on. So
3:07
on Monday, the Atlantic, and specifically
3:09
Jeffrey Goldberg, the editor-in-chief of
3:11
the Atlantic, published an article
3:13
titled "The Trump Administration Accidentally
3:15
Texted Me Its War Plans."
3:17
In this article, Goldberg details
3:19
his experience of being added,
3:21
seemingly inadvertently, to a signal
3:23
group chat with 18 of
3:25
the U.S.'s most senior national
3:28
security leaders, including Secretary of
3:30
State Marco Rubio, Tulsi Gabbard,
3:32
the Director of National Intelligence,
3:34
Pete Hegseth, the Secretary of
3:36
Defense, and even Vice President
3:38
JD Vance. The chat was
3:40
called "Houthi PC small group," presumably
3:42
standing for Principals Committee and
3:45
not personal... computer. And I
3:47
would say this story lit
3:49
the internet on fire. Absolutely.
3:51
You know, we have secure
3:53
communications channels that we use
3:55
in this country, Kevin, to
3:57
sort of organize and plan
3:59
for military operations. They were not
4:02
used in this case. That is kind
4:04
of a big deal in its own
4:06
right. But to accidentally add one of
4:08
the more prominent journalists in all of
4:11
America to this group chat as you're
4:13
planning it is truly unprecedented in the
4:15
history of this country. Yeah, and unprecedented
4:18
in my life too. Like I never
4:20
get invited to any secret classified group
4:22
chats, but I also feel like there
4:24
is an etiquette and a procedure around
4:27
the mid-sized group chat. Yes. So I'm
4:29
sure you've had the experience of being
4:31
added to a group chat. In my
4:34
case, it's usually like planning a birthday
4:36
party or something. And there's always like
4:38
a number or two on this group
4:40
chat that... you don't have stored in
4:43
your phone. Right? That's right. The unfamiliar
4:45
area code pops up along with the
4:47
named accounts of everyone who you do
4:50
know who's in this group chat. Absolutely.
4:52
And the first thing that I do
4:54
when that happens to me is I
4:56
try to figure out who the unnamed
4:59
people in the group chat are. Yes.
5:01
And until you figure that out, you
5:03
can't be giving your A material to
5:05
the group chat. This is so true.
5:08
You know, I saw someone say on
5:10
social media this week that gay group
5:12
chats have so much better operational security
5:15
than the national security advisor does. And
5:17
this is the exact reason. If you're
5:19
going to be in a group with
5:21
seven or eight people and there's one
5:24
number that you don't recognize, you're going
5:26
to be very tight-lipped until you find
5:28
out who this interloper is, something that most
5:31
people take extremely seriously. Yes. Even when
5:33
they are talking about things like planning
5:35
birthday parties. Yes. And not military strikes.
5:37
Exactly. Yeah. So before we get into
5:40
the tech piece, let's just sort of
5:42
say what has been happening since then.
5:44
So this story comes out on Monday
5:47
in the Atlantic. Everyone freaks out about
5:49
this. The government officials involved in this
5:51
group chat are sort of asked to
5:53
respond to it. There's actually a hearing
5:56
in Congress where several of the members
5:58
of this group chat are questioned about...
6:00
how a reporter got access to these
6:03
sensitive conversations and basically the posture of
6:05
the Trump officials implicated in this has
6:07
been to deny that this was a
6:09
secret at all. There have been various
6:12
officials saying nothing classified was discussed in
6:14
here. This wasn't an unapproved use of
6:16
a private messaging app. Basically nothing to
6:19
see here folks. Yes, and on Wednesday,
6:21
the Atlantic actually published the full text
6:23
message exchanges so that people can just
6:25
go read these things for themselves and
6:28
see just how detailed the plans shared
6:30
were. Yes. Let's just say it does
6:32
look like some of this information was
6:34
in fact classified. It included details about
6:37
like the specific timing of various airstrikes
6:39
that were being ordered in Yemen against
6:41
the Houthis, which are a sort of
6:44
rebel terrorist militia. Like this was not
6:46
a party planning group chat. Yeah, here's
6:48
a good test for you. When you
6:50
read these chats, imagine you're a Houthi
6:53
in Yemen. Would this information be useful
6:55
to you to avoid being struck by
6:57
a missile? I think it would be.
7:00
To me that's the test here, Kevin.
7:02
Totally. So let's dive into the tech
7:04
of it all, because I think there
7:06
is actually an important and interesting tech
7:09
story kind of beyond the headlines here.
7:11
So, Casey, what is Signal and how
7:13
does it work? Yeah, so Signal, as
7:16
you well know, as a frequent user,
7:18
is an open-source, end-to-end encrypted messaging
7:20
service that has been with us since
7:22
July 2014. It has been growing in
7:25
popularity over the past several years. A
7:27
lot of people like the fact that
7:29
unlike something like an iMessage or
7:32
a WhatsApp, this is something that is
7:34
built by a non-profit organization, and it
7:36
is fully open source. It's built on
7:38
an open source protocol so anyone can
7:41
look and see how it is built,
7:43
they can poke holes in it, try
7:45
to make it more secure, and you
7:48
know as the world has evolved more
7:50
and more people have found reasons to
7:52
have both end-to-end encrypted chats and to
7:54
have... disappearing chats. And so Signal has
7:57
been sort of part of this move
7:59
away from permanent chats stored forever to
8:01
more ephemeral, more private communications. Yeah. And
8:04
I think we should add that among
8:06
the people who think about cybersecurity, Signal
8:08
is seen as kind of the gold
8:10
standard of encrypted communications apps. It is
8:13
not perfect. No communications platform is ever
8:15
perfectly secure because it is used by
8:17
humans on devices that are not perfectly
8:19
secure. But it is widely regarded as
8:22
the most secure place to have
8:24
private conversations. Yeah, I mean, and if
8:26
you want to know why that is,
8:29
we could go into some level of
8:31
detail here. Signal makes it a priority to
8:33
collect as little metadata as possible. And
8:35
so, for example, the government went to
8:38
them and they said, hey, we have
8:40
like Kevin's Signal number. Tell us
8:42
all the contacts that Kevin has. They
8:45
don't actually know that. They don't store
8:47
that. They also do not store the
8:49
chats themselves, right? Those are on your
8:51
devices. So if the government says, hey,
8:54
give us all of Kevin's chats, they
8:56
don't have those. And there are some
8:58
pretty good encryption and privacy practices in
9:01
some of the other apps that I
9:03
think a lot of our listeners use
9:05
on a daily basis. WhatsApp has
9:07
pretty good protection. iMessage has pretty
9:10
good protection. But there are a bunch
9:12
of asterisks around that. And so if
9:14
security is super super important to you,
9:17
then I think many of us would
9:19
actually recommend Signal as the best place
9:21
to do your communicating. You and I
9:23
both use Signal, most reporters I know
9:26
use Signal to have sensitive conversations with
9:28
sources. I know that Signal has been
9:30
used by government officials in both Democratic
9:33
and Republican administrations for years now. So
9:35
Casey, I guess my first question is
9:37
like, why is this a big deal
9:39
that these high-ranking government officials were using
9:42
Signal if it is sort of the
9:44
gold standard of security? Sure. So I
9:46
would put it in maybe two sentences,
9:49
Kevin, that sum this whole thing up:
9:51
Signal is a secure app, but using
9:53
Signal alone does not make your messages
9:55
secure. So what do I mean by
9:58
that? Well, despite the fact that Signal
10:00
is secure, your device is vulnerable, particularly
10:02
if it's your personal device, if it
10:04
is your iPhone that you bought from
10:07
the Apple Store. There is a huge
10:09
industry of hackers out there developing what
10:11
are called zero day exploits. And a
10:14
zero day exploit is essentially an undiscovered
10:16
hack. They are available for sale on
10:18
the black market. They often cost millions
10:20
of dollars, and criminals and, more
10:23
often, state governments will purchase these attacks because
10:25
they say, hey, it is so important
10:27
to me to get into Kevin's phone.
10:30
I have to know what he's planning
10:32
for Hard Fork this week, so I'm
10:34
gonna spend three million dollars, I'm gonna
10:36
find a way to get onto
10:39
his personal device. And if I have
10:41
done that, even if you're using Signal,
10:43
it doesn't matter, because I'm on your
10:46
device now. I can read all of
10:48
your messages. Right, so this is the
10:50
concern. So wait, what are
10:52
American military officials supposed to
10:55
do instead? Well, we have special designated
10:57
channels for them to use. We have
10:59
networks that are not the public internet,
11:02
right? We have messaging tools that are
11:04
not commercially available, and we have set
11:06
up protocols to make them use those
11:08
channels, to avoid the scenario that I
11:11
just described. Yeah, so let's go into
11:13
that a little bit because as you
11:15
mentioned there are sort of designated communications
11:18
platforms and channels that high-ranking government officials,
11:20
including those with access to classified information,
11:22
are supposed to use, right? There are
11:24
these things called SCIFs, sensitive compartmented
11:27
information facilities. Those are like the physical
11:29
rooms that you can go into to
11:31
receive like classified briefings. Usually you have
11:34
to like keep your phone out of
11:36
those rooms for security. Yeah, I keep
11:38
all of my feelings in a sensitive
11:40
compartmentalized information facility. But you're working on
11:43
that in therapy. I'm working on that,
11:45
I'm working on it. But if you're
11:47
not, like, physically in the same place
11:49
as the people that you're trying to
11:52
meet with, there are these secure
11:54
communication channels. Casey, what are those channels?
11:56
Well, there are just specialized services for
11:59
this. So this is like what a
12:01
lot of the tech giants will work
12:03
on. Microsoft has something called Azure Government,
12:05
which is built specifically to handle classified
12:08
data. And this is like sort of
12:10
rarefied air, right? Not that many big
12:12
platforms actually go to the trouble of
12:15
making this software. It's a pretty small,
12:17
addressable market. So you've got to have
12:19
a really like solid product and really
12:21
good sales force to like make this
12:24
worth your while. But the stuff exists.
12:26
And the government has bought these services over
12:28
the years and installed them, and
12:31
this is what the military is supposed
12:33
to use. Yeah, so I did some
12:35
research on this, because I was basically
12:37
trying to figure out, like, are these
12:40
high-ranking national security and government officials using
12:42
Signal because it is the kind of
12:44
easiest and most intuitive thing for them
12:47
to use? Are they doing it because
12:49
they don't want to use the stuff
12:51
that the government has set up for
12:53
its own employees to communicate? Like, why
12:56
were they sort of doing this? Because
12:58
one thing that stuck out to me
13:00
in the transcripts of these group chats,
13:03
is that nobody in the chats seemed
13:05
surprised at all that this was happening
13:07
on signal, right? No one when this
13:09
group was formed and these 18 people
13:12
were added to it, you know, said
13:14
anything about, hey, why are we using
13:16
signal for this? Why aren't we using
13:19
Microsoft Teams or whatever the sort of
13:21
official approved thing is? What I found
13:23
out when I started doing this research
13:25
is that there is something of a
13:28
patchwork of different applications that have been
13:30
cleared for use by various agencies of
13:32
government. And one reason that these high-ranking
13:34
government officials may have been using signal
13:37
instead of these other apps is because
13:39
some of these apps are not designed
13:41
to work across the agencies of government,
13:44
right? The DOD has its own communication
13:46
protocols, maybe the State Department has its
13:48
own communication protocols. Maybe it's not trivially
13:50
easy to kind of start up a
13:53
conversation with a bunch of people from
13:55
various agencies on a single government-owned and
13:57
controlled tool. Yeah, and that should not
14:00
surprise us because something that is always
14:02
true of secure communications is that it
14:04
is inconvenient and annoying. This is what
14:06
makes it secure, is that you have
14:09
gone to great lengths to conceal what
14:11
you are doing. I read some reporting
14:13
in the Washington Post this week that
14:16
for the most part when they are
14:18
doing their most sensitive communications, those communications
14:20
are supposed to be done in person.
14:22
Right? Like that is the default. And
14:25
if you cannot do it in person,
14:27
then you're supposed to use these
14:29
secure communication channels. Again, not the
14:31
public internet. So that is the
14:33
protocol that was not followed here.
14:35
Right. And I think one other possible explanation
14:37
for why these high ranking officials were
14:39
using Signal is that Signal allows you
14:41
to create disappearing messages. Right. That is
14:44
a core feature of the signal product
14:46
is that you can set in any
14:48
group chat. Like all these messages are
14:50
going to delete themselves after an hour
14:53
or a day or a week. In
14:55
this case, they seem to have been
14:57
set to delete after four weeks. Now,
14:59
there are good reasons why you might
15:02
want to do that if you're a
15:04
national security official. You don't want this
15:06
stuff to hang around forever. But
15:08
we should also say that that
15:10
is also an apparent violation of
15:12
the rules for government communication. And
15:14
so one reason that the government
15:17
and various agencies have their own
15:19
communications channels is because those channels
15:21
can be preserved, to comply with
15:23
these laws about federal record keeping?
15:25
Yes, there is a Federal Records
15:27
Act and a Presidential Records Act,
15:29
and the idea behind those laws,
15:31
Kevin, is that, well, you know,
15:33
if the government is planning a
15:35
massive war campaign that will kill a bunch
15:38
of people, we should have a record of
15:40
that. We do, you know, in a democracy,
15:42
you want there to be a preservation of
15:44
some of the logic behind these attacks that
15:46
the government is making. So, yes, it seems
15:49
like they clearly have just decided they're not
15:51
going to follow those. I think the
15:53
place where I land on
15:55
this is that this is,
15:57
I would say, obviously a
15:59
dumb... probably unforgivably dumb mistake on
16:01
the part of a high-ranking national security
16:03
official. My favorite sort of like cover-up
16:06
attempt on this was the national security
16:08
advisor Michael Waltz was asked about how
16:10
this happened because he was the person
16:12
according to these screenshots of this chat
16:14
who added Jeffrey Goldberg from the Atlantic
16:16
to this chat and he basically gave
16:18
this statement that was like we're all
16:21
trying to figure out what happened here.
16:23
We saw the screenshot, Michael. You added
16:25
him. I think there are obvious questions
16:27
that this raises about whether he had mistaken
16:29
him for someone else named Jeffrey Goldberg,
16:31
maybe a national security official of some
16:34
kind. Oh, I bet the words Jeffrey
16:36
Goldberg never even appeared on Michael Waltz's
16:38
screen. Okay, this is like the realm
16:40
of pure speculation, but let me just
16:42
tell you, as somebody who is routinely
16:44
contacted by people anonymously on Signal, usually
16:46
their full name is not in the
16:49
message request. It's like, you have a
16:51
new message request from JG, so just
16:53
those initials. And so I will look
16:55
through my signal chats and I'll be
16:57
trying to, I want to ask that
16:59
one person about the one thing, what
17:01
was their signal name? And I'm looking
17:04
through a soup of initials. So like,
17:06
I actually understand why that happened, which
17:08
is yet one more reason why you
17:10
might not want to use Signal to
17:12
do your war planning. Yes, exactly. I
17:14
think the most obvious sort of Occam's
17:17
razor explanation for why all these high-ranking
17:19
officials are on Signal is that it's
19:21
just a better and easier
19:23
and more intuitive product than anything the
19:25
government is supposed to be using for
17:27
this stuff. It's more convenient. Yes, and
17:29
I find this totally plausible having spoken
17:32
with people who have been involved with
17:34
government technology in the past, like it
17:36
is just not the place where
17:38
cutting-edge software is developed and deployed. You
17:40
know, there famously was this sort of
17:42
struggle between President Obama and some of
17:44
his security advisors when he wanted to
17:47
use a Blackberry in the Oval Office,
17:49
and there was sort of like no
17:51
precedent for how to do that securely,
17:53
and so he fought them until they
17:55
sort of made him a special Blackberry
17:57
that he could use. Like, this is
17:59
a time-honored struggle between politicians who want
18:02
to use the stuff that they used
18:04
when they were civilians, who enter political office and
18:06
are told again and again, like, you
18:08
can't do that, you have to use
18:10
this clunkier, older, worse thing instead. Well,
18:12
I'm detecting a lot of sympathy in
18:15
your voice for the Trump administration
18:17
here which is somewhat surprising to me
18:19
because, while I can stipulate that, sure,
18:21
they must go through an annoying
18:23
process in order to plan a war
18:25
I'm somebody who thinks well it probably
18:27
should be really annoying and inconvenient like
18:30
you probably should actually have to like
18:32
go physically attend a meeting to do
18:34
all of this stuff. And you
18:36
know, if we are going to decide
18:38
that war planning is something that like
18:40
the Secretary of Defense can do, like
18:42
during commercials for March Madness, just like
18:45
pecking away on his iPhone, we're going
18:47
to get in a lot of trouble.
18:49
Like, imagine you're an adversary of America
18:51
right now, and you've just found out
18:53
that the entire administration is just chatting
18:55
away on their personal devices. Do you
18:58
not think that they have gone straight
19:00
to the black market and said, what's
19:02
a zero day exploit that we can
19:04
use to get on Pete Hegseth's
19:06
phone? Of course they have. For sure.
19:08
And so what I'm not saying here
19:10
is that this is excusable behavior. What
19:13
I am saying is that I think
19:15
people, including government officials, will gravitate toward
19:17
something that offers them the right mix
19:19
of convenience and security. I would like
19:21
for this to be an incident that
19:23
kind of spurs the development of much
19:25
better and more secure ways for the
19:28
government to communicate with itself. Like it
19:30
should not be the case that if
19:32
a bunch of high-ranking officials want to
19:34
start a group chat with each other,
19:36
they have to go to this private
19:38
sector app rather than something that the
19:40
government itself owns and controls and that
19:43
can be verifiably secure. So yes, I
19:45
think this was extremely dumb. It is
19:47
also, by the way, something that I'm
19:49
sure was happening in Democratic administrations too.
19:51
Like, this is not a partisan issue
19:53
here. Well, what exactly do you think
19:56
was happening? Like, yes, the Democrats were
19:58
using signal, and yes, they were using
20:00
disappearing messages. It's not clear to me
20:02
that they were planning military strikes. I
20:04
don't know. I have no information either
20:06
way on that. What I do know
20:08
is that I have gotten messages on
20:11
signal from officials in both parties, I
20:13
have gotten emails from the personal Gmail
20:15
accounts of administration officials in both parties.
20:17
This is, I think, an open secret
20:19
in Washington, that the government's own tech
20:21
stack is not good, and that a
20:23
lot of people, for reasons of convenience
20:26
or privacy or what have you, have
20:28
chosen to use these less secure private
20:30
sector things instead. I think I should
20:32
make a serious point here, which is
20:34
that it is in the national interest
20:36
of the United States to have a
20:38
smaller gap between the leading commercial technology
20:41
products and the products that the government
20:43
is allowed to use. Right now in
20:45
this country, if you are a smart
20:47
and talented person who wants to go
20:49
into government, one of the costs of
20:51
that move is that you effectively have
20:54
to go from using the best stuff.
20:56
that anyone with an iPhone or an
20:58
Android phone can use to using this
21:00
more outdated, clunkier, less intuitive set of
21:02
tools. I do not think that should
21:04
be the case. I think that the
21:06
stuff that the public sector is using
21:09
for communication, including very sensitive things,
21:11
should be as intuitive and easy to
21:13
use and convenient as the stuff that
21:15
the general public uses. Yes, it should
21:17
have additional layers of privacy. Yes, you
21:19
should have to do some kind of,
21:21
you know, procurement process. But a recurring
21:24
theme on this podcast whenever we talk
21:26
about government and tech is that it
21:28
is just way too slow and hard
21:30
to get standard tools approved for use
21:32
in government. So if there's one silver
21:34
lining of the Signalgate fiasco, I
21:37
hope it is that our government takes
21:39
access to good technology products more seriously
21:41
and starts building things and maintaining things
21:43
that are actually competitive with the state
21:45
of the art. I'm going to take
21:47
the other side of this one, Kevin.
21:49
I think if you look at the
21:52
way that the government was able to
21:54
protect their secrets in previous administrations prior
21:56
to the spread of Signal, they were
21:58
actually able to prevent high-ranking officials
22:00
from accidentally adding journalists to conversations that
22:02
they shouldn't have been in. There is
22:04
no evidence to me that because of
22:07
the sort of... aging infrastructure of the
22:09
communication systems of government, we were unable
22:11
to achieve some sort of military objective.
22:13
So, you know, even as somebody who
22:15
generally likes technology, I think some of
22:17
these, you know, tech oligarchs have this
22:19
extremely know-it-all attitude that our tech is
22:22
better than your tech, yours sucks, and
22:24
they sort of bluster in, and they
22:26
say, you know, all of your
22:28
aging legacy systems, we can just get
22:30
rid of those and move on to
22:32
the next thing. And then you wake
22:35
up after Signalgate, and it turns
22:37
out that protocol was actually protecting something. Right.
22:39
Like this is the Silicon Valley story
22:41
over and over again is we are
22:43
going to come in and try to
22:45
build everything from first principles. We're going
22:47
to be completely ahistorical. We're not
22:50
going to learn one lesson that anyone
22:52
else has ever learned before because we
22:54
think we're smarter than you. And Signalgate
22:56
shows us that, actually, no, sometimes
22:58
people have actually learned things and there
23:00
is wisdom to be gleaned from the
23:02
ages, Kevin, and maybe that should have
23:05
been done here. Well
23:08
Casey the Defense
23:10
Department may be
23:13
in its failing
23:16
era, but AI
23:18
is in its
23:21
scaling era. We'll
23:23
talk to the author
23:26
of The Scaling
23:28
Era, Dwarkesh Patel,
23:30
when we come back. It's
23:32
how financial firms turn into
23:34
market tracking, client servicing, cyber
23:37
securing, economic engines. With leading
23:39
connectivity and networking, advanced cybersecurity
23:41
and expert partnership, Comcast Business
23:43
is powering the engine of
23:45
modern business, powering possibilities, restrictions
23:47
apply. This podcast is supported
23:50
by Oracle. AI requires a
23:52
lot of compute power and
23:54
the cost for your... AI
23:56
workloads can spiral. That is,
23:58
unless you're running on OCI,
24:01
Oracle Cloud Infrastructure. This was
24:03
the cloud built for AI,
24:05
a blazing fast enterprise-grade platform
24:07
for your infrastructure, database apps,
24:09
and all of your AI
24:11
workloads. Right now, Oracle can
24:14
cut your current cloud bill
24:16
in half if you move
24:18
to OCI. Minimum financial commitment
24:20
and other terms apply. Offer
24:22
ends March 31st. See if
24:24
you qualify at oracle.com/Hard Fork.
24:29
Well, Casey, there are a number of
24:31
people within the clubby and insular world
24:33
of AI who are so well known
24:36
that they go by a single name.
24:38
That's true. Madonna, Cher, and who else?
24:40
Well, there's Dario, Sam, Ilya, various other
24:42
people, and then there's Dwarkesh, yes, who
24:45
is not working at an AI company.
24:47
He is an independent journalist. He hosts
24:49
the Dwarkesh Podcast, which has had a
24:51
number of former Hard Fork guests on it.
24:54
And he is, I would say, one
24:56
of the best-known media figures in the
24:58
world of AI. Yeah, absolutely. You know,
25:01
Dwarkesh seemingly came out of nowhere a
25:03
few years back and quickly became... well-respected
25:05
for his highly technical, deeply researched interviews
25:07
with some of the leading figures, not
25:10
just in AI, but also in history
25:12
and other disciplines. He is a relentlessly
25:14
curious person, but I think one of
25:17
the reasons why he is so interesting
25:19
to us is on the subject of
25:21
AI, he really has just developed an
25:23
incredible roster of guests and a great
25:26
understanding of the material. Yes, and now
25:28
as of this week he has a
25:30
new book out which is called The
25:32
Scaling Era: An Oral History of AI,
25:35
2019 to 2025, and it is mostly
25:37
excerpts and transcripts from his podcast and
25:39
the interviews that he's done with luminaries
25:42
in AI, but through it he kind
25:44
of assembles the history. of what's been
25:46
happening for the past six or so
25:48
years in AI development, talking to some
25:51
of the scientists and engineers who are
25:53
building it, the CEOs who are making
25:55
decisions about it, and the people who
25:58
are reckoning with what it all means.
26:00
Indeed. So we have a lot to
26:02
ask Dwarkesh about, and we're excited to
26:04
get him into the studio today and
26:07
hang out. All right, let's bring in
26:09
Dwarkesh Patel. I want to start with
26:11
the Dwarkesh origin story. You are 24
26:13
years old. You graduated from UT Austin,
26:16
you majored in computer science. I'm sure
26:18
a lot of your classmates and people
26:20
with your interest in tech and AI
26:23
chose the more traditional path of going
26:25
to a tech company starting to work
26:27
on this stuff directly. Presumably that was
26:29
a path that was available to you.
26:32
Why did you decide to start a
26:34
podcast instead? So it was never my
26:36
intention for this to become my career.
26:39
I was doing this podcast basically in my
26:41
free time. I was interested in these
26:43
economists and historians and it was just
26:45
cool that I could cold email them
26:48
and get them to come on my
26:50
podcast and then pepper them with questions
26:52
for a few hours. And then when
26:54
I graduated, I didn't really know what
26:57
I wanted to do next. So the
26:59
podcast was almost a gap year experience
27:01
of, let me do this. It'll help
27:04
me figure out what kind of startup
27:06
I want to launch, or where
27:08
I can get hired. And then,
27:10
yeah, this
27:17
could actually be a career. This is
27:20
a more fun startup than whatever code
27:22
monkey, you know, third setting in Android
27:24
kind of job. So I basically just kept it
27:26
up and it's grown ever since and
27:29
it's been a fun time. Yeah, I
27:31
mean, I'm curious how you describe what
27:33
you do. Do you consider yourself a
27:35
journalist? I guess so. I don't know
27:38
if there's a good word. I mean,
27:40
there's like journalist, there's content creator, there's...
27:42
Blogger,
27:45
broadcaster, sure, journalist, yes. Humanitarian. I asked
27:47
because I started listening to your podcast
27:49
a while ago back when it was
27:51
called The Lunar Society and the thing
27:54
that I noticed right away was that
27:56
you were not doing a ton of
27:58
explanation and translation. Like I often think
28:01
of our job as journalists as one
28:03
primarily of translation of taking things that
28:05
insiders and experts are talking about and
28:07
like making them legible to a broader
28:10
and less specialized audience. But your podcast
28:12
was so interesting to me because you
28:14
weren't really doing that. You were kind
28:16
of not afraid to stay in the
28:19
sort of wonky insider zone. You were
28:21
having conversations with these very technical experts
28:23
in their native language, even if it
28:26
got pretty insider and wonky at times.
28:28
Was there a theory behind that choice?
28:30
No, honestly, it never occurred to me,
28:32
because nobody was listening in the beginning,
28:35
right? I think it was a bad
28:37
use of my guest's time to have
28:39
said yes in the first place, but
28:42
now that they've said yes, like, let's
28:44
just have fun with this, right? Like,
28:46
who is listening to this? It's me,
28:48
it's for me. And then what I
28:51
realize is that people appreciated that style
28:53
of, because with a lot of these
28:55
people, they've done so many interviews with
28:57
this person. And if you're a dinner
29:00
with them, you'd just ask them about
29:02
your main cruxes, like, here's,
29:04
you know, what's going on
29:07
here, here's why I disagree with you.
29:09
You tease them about like their big
29:11
ideas or something. But initially it was
29:13
just an accident. I think in mainstream
29:16
media we are terrified that you might
29:18
read something we write or listen to
29:20
something we do and not understand a
29:23
word of it because there's always an
29:25
assumption that that is the moment that
29:27
you will stop reading. I think what
29:29
you've discovered with your podcast is that
29:32
that's actually a moment that causes people
29:34
to lean in. So
29:48
you've got this new book out, The Scaling
29:50
Era, basically a sort of oral history
29:52
of the past six or so years
29:54
of AI development. Tell us about the
29:57
book. So I have been doing these
29:59
interviews with the key people thinking about
30:01
AI over the last two years. You
30:04
know, CEOs like Mark Zuckerberg and Demis
30:06
Hassabis and Dario Amodei, researchers at a
30:08
deeply technical level, economists who are thinking
30:10
about what will the deployment of these
30:13
technologies be like, philosophers who are talking
30:15
about these essential questions about AI ethics
30:17
and how do we, how will we
30:19
align systems that are, you know, millions
30:22
of times more powerful or more, at
30:24
least more plentiful? And these are some
30:26
of the most gnarly, difficult questions that
30:29
humanity has ever faced. Like, what is
30:31
the true nature of intelligence, right? Or
30:33
what will happen when we have millions
30:35
of intelligent machines that are running around
30:38
in the world? Is the idea of
30:40
superhuman intelligence even a coherent concept? Like
30:42
what exactly does that mean? What exactly
30:45
will it take to get there obviously?
30:47
So all of it was such a
30:49
cool experience to just see all of
30:51
that organized in this way, where we would
30:54
have annotations and definitions and just beautiful
30:56
graphs. My co-author Gavin Leech and our
30:58
editor Rebecca Haskad and the whole team
31:00
just did a wonderful job making this
31:03
really beautiful artifact. So, that's the book.
31:05
I also really liked the way that
31:07
the book sort of slows down and
31:10
explains some of these basic concepts, footnotes
31:12
the relevant research. Like you really do,
31:14
it is more accessible than I would
31:16
say the average episode of the Dwarkesh
31:19
Podcast, in the sense that you can
31:21
really start from, like I would feel
31:23
comfortable giving this to someone as a
31:26
gift who doesn't know a ton about
31:28
AI and sort of saying like, this
31:30
is sort of a good primer to
31:32
what's been happening for the past few
31:35
years in this world. It won't treat
31:37
you like an idiot. Like a lot
31:39
of these other AI books are just
31:41
about this, oh, big picture, how will
31:44
society be changed? And it's like, no,
31:46
to understand AI, you need to know,
31:48
like, what is actually happening with the
31:51
models, what is actually happening with the
31:53
hardware, what is actually happening in terms
31:55
of like actual investments and CapEx and
31:57
whatever. And we'll get into that. But
32:00
also, because of this enhancement with the
32:02
notes and definitions and annotations, it's still accessible.
32:04
It's written for a smart college roommate
32:07
in a different field. One question that
32:09
you asked at least a couple people
32:11
in your book, some version of, was
32:13
basically what's their best guess at why
32:16
scaling works? Why pouring more compute and
32:18
more data into these models tends to
32:20
yield something like intelligence? I'm curious what
32:22
your answer for that is. What's your
32:25
current best guess of why scaling works?
32:28
I honestly don't think there's a good
32:30
answer anybody has. The best one I've
32:32
heard is this idea that intelligence is
32:34
just this hodgepodge of different kinds of
32:37
circuits and programs. And this is so
32:39
hand-wavy, and I acknowledge it's hand-wavy, but
32:41
you got to come up with some
32:43
answer. And that fundamentally what intelligence is
32:46
is this pattern matching thing, this ability
32:48
to see how different ideas connect and
32:50
so forth. And as you make this
32:52
bucket bigger. You can start off with
32:55
noticing, does this look like a cat
32:57
or not, and then you get to
32:59
higher and higher levels of abstraction, like
33:01
what is the structure of time, and
33:04
the so-called ether, and the speed of
33:06
light, and so forth. Again, so hand-wavy,
33:08
but I think it will be something
33:10
like this. Yeah, I mean
33:24
there seems to be this sort of
33:26
philosophical divide among the AGI believers and
33:29
the AGI skeptics over the question of
33:31
whether there is something other than just
33:33
materialism in intelligence, whether it is just
33:35
like... Intelligence is just a function of
33:38
having the right number of neurons and
33:40
synapses firing at the right times and
33:42
sort of pattern matching and doing next
33:44
token prediction. I'm thinking of this like
33:47
famous Sam Altman tweet where he posted
33:49
I am a stochastic parrot and so
33:51
are you basically sort of dealing with
33:53
rebutting the sort of common attack on
33:56
large language models which was that they
33:58
were just stochastic parrots, they're just learning
34:00
to regurgitate their training data and predict
34:02
the next token. And among a lot
34:05
of the sort of AGI true believers
34:07
that I know, there is this feeling
34:09
that we are just essentially doing what
34:11
these language models are doing in predicting
34:14
the next tokens or synthesizing things that
34:16
we've heard from other places and regurgitating
34:18
them. That's a hard pill for a
34:20
lot of people to swallow, including me,
34:23
like I'm not quite... a full materialist.
34:25
Are you? Like, do you believe that
34:27
there's something about intelligence that is not
34:29
just raw processing power and data
34:32
and pattern matching? I don't. I mean,
34:34
it's hard for me to think about what
34:36
that would be. There's obviously religious
34:38
ideas about, there's maybe a soul
34:40
or something like that, but separate
34:43
from that, something we could sort of
34:45
have a debate about or analyze. Yeah,
34:47
actually I'm curious about, like, what kind
34:49
of thing could it be? Ethics?
34:51
I don't know, like that
34:54
sounds very fuzzy and non-scientific,
34:56
but like, I do think there
34:58
is something essential about intelligence
35:00
and being situationally
35:03
intelligent that requires like
35:05
something outside of your immediate
35:07
experience, like knowing what is right
35:10
and what is wrong. Well, I
35:12
think one reason why this question
35:14
might be a bit challenging is
35:16
that there are still many areas
35:18
where the AI we have to
35:20
date is just less than human
35:22
in its quality level, right? Like
35:24
these machines don't really have common
35:26
sense. Their memories are not great.
35:28
They don't seem to be great
35:30
at acquiring new skills, right? If
35:32
it's not in the training data,
35:34
sometimes it's hard for them to
35:36
get there. And so it does
35:38
raise the question, well, is it
35:40
kind of categorically different than whatever
35:43
this other kind of intelligence is
35:45
that we're inventing? Yeah, that's right. On
35:47
the ethics thing, I think it's notable that if
35:49
you talk to GPT-4, it has a sense of ethics.
35:51
If you talk to Claude, it has a sense of
35:53
ethics. It will tell you. You can talk about, like,
35:55
what do you think about animal ethics? What
35:57
do you think about this kind of moral
35:59
question? Like, it has a... I mean,
36:02
I'm not sure what you mean by
36:04
a sense of ethics. In fact, the
36:06
worry is that it might have too
36:09
strong a sense of ethics, right? And
36:11
by that I'm referring to, maybe its
36:13
ethics becomes, like, I want more paper
36:16
clips, or... I mean, sorry, on
36:18
a more serious note. But those ethics
36:20
are given to it in part by
36:22
the process of training and fine-tuning the
36:25
model, or making it obey some constitution.
36:27
Like, where do you think you get
36:29
your ethics? Who trained you? Yeah, I
36:32
mean, it is notable that most
36:34
people in a given society share the
36:36
same basic worldview, that, like, you and
36:39
I agree on 99% of things, and
36:41
we would probably agree on like 50%
36:43
of things with somebody in the year
36:46
1500, and the reason we agree on
36:48
so much has to do with our
36:50
training distribution, which is, you
36:53
know, the society we live in. Yeah,
36:55
yeah. So, I mean, maybe this argument
36:57
that there is something more to intelligence
37:00
than just brute-force computation is somewhat
37:02
romantic. Yes, I was trying to figure
37:04
out a more sophisticated way of saying
37:06
cope, but do you think that is
37:09
cope? Do you think that the people
37:11
who are sort of skeptical of the
37:13
possibility of AGI because they believe that
37:16
computers lack something essential that humans have
37:18
is just a response to not being
37:20
able to cope with the possibility that
37:23
computers could replace them? I think there's
37:25
two different questions. One is, is it
37:27
cope to say that we won't get
37:30
AGI in the next two years or
37:32
three years or whatever short timelines that
37:34
some people in San Francisco, some of
37:37
our friends seem to have. I don't
37:39
think that's cope. I think there's actually
37:41
a lot of reasonable arguments one can
37:43
make about why it will take a
37:46
longer period of time. Maybe it'll be
37:48
five years, ten years. But maybe this idea,
37:50
as you were saying,
37:53
that we will never get there
37:55
is cope, because... There's always this argument
37:57
about the God of the gaps,
38:00
the intelligence of the gaps. The thing
38:02
it can't do is a thing that
38:04
is fundamentally human. One notable thing, Aristotle
38:07
had this idea that what makes us
38:09
human is fundamentally our ability to reason.
38:11
And reasoning is the first thing these
38:14
models have learned to do. Like they're
38:16
not that useful at most things, except
38:18
for raw reasoning. Whereas the things we
38:21
think of just as pure reptile brain,
38:23
of having this understanding of the physical
38:25
world as they're moving about it or
38:27
something, that is the thing that these
38:30
models struggle with. So we'll have to
38:32
think about what is the archetypical human...
38:34
skill set as these models advance. That's
38:37
fascinating. I never actually, that never actually
38:39
occurred to me. I think it speaks
38:41
a lot to why people find them
38:44
so powerful in this sort of like
38:46
therapist, mentor, coach role, right, is that
38:48
those figures that we bring into our
38:51
lives are often just there to help
38:53
us reason through something. And these models
38:55
are increasingly very good at it. Yeah.
38:58
Yeah. In your conversations with all these
39:00
AI researchers and industry leaders, are there
39:02
any blind spots that you feel they
39:04
have consistently or places where they are
39:07
not paying enough attention to the consequences
39:09
of developing AI? I think they do
39:11
not, with a few notable exceptions, they
39:14
don't have a concrete sense of what
39:16
things going well looks like and what
39:18
stands in the way. If you just
39:21
ask them what the year 2040 looks
39:23
like, they'll say things like, oh, we'll
39:25
cure these diseases. But what is our
39:28
relationship to billions of advanced intelligences? How
39:30
do we do redistribution such that the...
39:32
I mean it's not your or my
39:35
fault that we'll be out of a
39:37
job, right? There's no in principle reason
39:39
why everybody couldn't be better off, but
39:42
there shouldn't be this zero-sum thing where...
39:44
We should make sure the AIs don't
39:46
take over, and we should also make
39:48
sure we don't treat them terribly. Something
39:51
else that's been on my mind recently
39:53
that you're sort of getting at,
39:55
or that maybe you're getting at with
39:58
your question, Kevin, is how seriously do
40:00
the big tech companies take the prospect
40:02
of AGI arriving. Because on one hand,
40:05
they'll tell you. We're the leading frontier
40:07
labs, we're publishing some of the best
40:09
research, we're making some of the best
40:12
products, and yet it seems like none
40:14
of them are really reckoning with any
40:16
of the questions that you just raised.
40:19
It sort of makes sense, even saying
40:21
some of the stuff that you just
40:23
said right now, which seems quite reasonable
40:25
to me, would sound weird if Satya
40:28
Nadella were talking about it on an
40:30
earnings call, right? And yet at the
40:32
same time, I just wonder. But like
40:35
on some level it's weird to me,
40:37
you know, somebody recently was talking to
40:39
me about Google and was sort of
40:42
saying if you look at what Google
40:44
is shipping right now, it doesn't seem
40:46
like they think that very powerful intelligence
40:49
is going to arrive any time soon.
40:51
What they're taking seriously is the prospect
40:53
that ChatGPT will replace Google in
40:56
search. And that maybe if you actually
40:58
did take AGI seriously, you would have
41:00
a very different approach to what you
41:02
were doing. So as somebody who was
41:05
like talking to the CEOs of... these
41:07
companies, I'm curious, how do you rate
41:09
how seriously they're actually taking AGI? I
41:12
think almost none of them are AGI-pilled.
41:14
Like they might say the word AGI,
41:16
but if you just ask them, what
41:19
does it mean to have a world
41:21
with like actually automated intelligence? There's a
41:23
couple of immediate implications. So right now,
41:26
these companies are competing with each other
41:28
for market share in chat.
41:30
If you had a fully autonomous worker,
41:33
even a remote worker, that's worth tens
41:35
of trillions of dollars, that's worth way
41:37
more than a chatbot, right? So
41:40
you'd be much more interested in deploying
41:42
that kind of capability. I don't know
41:44
if an API is the right way, maybe
41:46
it's like a virtual machine or something.
41:49
I'd just be much more interested in
41:51
developing the UI, the guardrails, whatever, to
41:53
make that work, then trying to get
41:56
more people to use my chat app.
41:58
And then I also think compute would
42:00
just be this huge bottleneck, if you
42:03
really believe. GDP per capita is like
42:05
$70,000 or something. So I would just
42:07
be interested in getting as much compute
42:10
as possible to have it ready to
42:12
deploy once the AIs are powerful enough.
42:14
One of the things I really enjoyed
42:17
about your book is getting a sense
42:19
not just of what the people you've
42:21
interviewed think about AI and AGI and
42:23
scaling, but what you believe. And I
42:26
have to say I was surprised at
42:28
the end of the book you said
42:30
that you believe AI is more likely
42:33
than not to be net beneficial for
42:35
humanity. And I was surprised because a
42:37
lot of the people you talk to
42:40
have quite high p(doom)s, they're quite
42:42
worried about the way AI is going,
42:44
that seems not to have spread to
42:47
you, like you seem to be much
42:49
more optimistic than some of your guests.
42:51
So is that just a quirk of
42:54
your personality or why are you more
42:56
optimistic than the people you interview? So
42:58
if you have a p(doom) of
43:01
10% or 20% that is first of
43:03
all unacceptable. The idea that everything you
43:05
care about, everybody you care about, could
43:07
in some way be extinguished, disempowered, so
43:10
forth. That is just an incredibly high
43:12
number. Just like let's say nuclear weapons
43:14
is like a doom scenario. If you're
43:17
like, should I go to
43:19
war with this country, and there's a
43:21
20% chance that there's no humans around,
43:24
you should not take that bet. But
43:26
it's harder to maybe express the kinds
43:28
of improvements which are... This will sound
43:31
very utopian, but we do have peak
43:33
experiences in our life. We know that,
43:35
or we have people we really care
43:38
about, but we know how beautiful life
43:40
can be, how much connection there can
43:42
be, how much joy we can get
43:44
out of, whether it's learning or curiosity,
43:47
or other kinds of things. And there
43:49
can just be many more people, us,
43:51
digital, whatever, who can experience it. And
43:54
there's another way to think about this,
43:56
because it's fundamentally impossible to know what
43:58
the future holds. But one intuition here
44:01
is: I gave you the choice. I'll
44:03
send you back to the year 1500.
44:05
Tell me the amount of money I
44:08
would have to give you, but you
44:10
can only use that money in the
44:12
year 1500, such that it would be worth
44:15
it for you to go back to
44:17
the year 1500? I think it's quite
44:19
plausible the answer is there's no amount
44:22
of money I'd rather have in the
44:24
year 1500 than just be alive right
44:26
now with my normal standard of living.
44:28
And I think, I hope, will have
44:31
a similar relationship with the future. What
44:33
is your post AGI plan? Like, do
44:35
you think that you will... be podcasting.
44:38
Will you still hang out with us?
44:40
It's funny because we have our, I
44:42
mean we have our post AGI careers
44:45
already, right? Even after the AGI comes,
44:47
they might automate everybody else in this
44:49
office, but you and I will just
44:52
get in front of a camera and
44:54
there will still be value in sort
44:56
of like having a personality, being able
44:59
to talk, explain, being somebody that people
45:01
relate to on a human level. That's
45:03
right. I think so. I am curious
45:05
though, because a thing that I know
45:08
about you from our brief interactions and
45:10
just you know reading things that have
45:12
been written about you is that you
45:15
believe in learning broadly you have been
45:17
described as a person who's being on
45:19
a quest to learn everything. I think
45:22
a lot of... Casey's on a quest
45:24
to learn nothing. I'm a quest to
45:26
learn what I need to learn. Just
45:29
in time manufacturers. Yes. I think a
45:31
lot of people right now, especially students
45:33
and younger people, are questioning the value
45:36
of accumulating knowledge. We all have these
45:38
pocket oracles now that we can consult
45:40
on basically anything. And sometimes I think
45:43
I was at a school last week
45:45
talking with some college students and one
45:47
of them basically said they felt like
45:49
they were a little bit like the
45:52
taxi drivers in London who still had
45:54
to like memorize all the streets even
45:56
after Google Maps was invented and that
45:59
was sort of like obsolete like they
46:01
felt like they were just sort of
46:03
doing it for the sake of doing
46:06
it. So I'm curious what you think the value of broad knowledge accumulation is in an
46:08
age of powerful AI. The thing I
46:10
would say to somebody who is incredibly
46:13
dismayed is like, why am I going
46:15
to college? Why is any of this
46:17
worth it is, if you believe AGI...
46:20
ASI is going to be here two
46:22
years, that's fine. I don't think that's
46:24
particularly likely. And if it is, what
46:26
are you going to do about it anyways?
46:29
So you might as well focus on
46:31
the other worlds? And in the other
46:33
worlds, what's going to happen before the
46:36
fully automated robot that's automating the entire
46:38
economy is that
46:40
these models will be able to help
46:43
you at certain kinds of tasks in
46:45
the future. And the kinds of things
46:47
that you will be in a good
46:50
position to do is if you have
46:52
deep understanding of a particular industry, the
46:54
relevant problems in it, and it's hard
46:57
to give advice in the abstract like
46:59
this because I don't know about these
47:01
industries, so you'll have to figure it
47:04
out, but this is... probably the time
47:06
to be the most ambitious, to have
47:08
the most amount of agency, to actually,
47:10
these models currently aren't really good at
47:13
actually doing things in the real world,
47:15
or even the visual world. If you
47:17
can do that and use these as
47:20
leverage, this is probably the most exciting
47:22
time to be around. Here's my answer
47:24
for that. You don't want to be
47:27
in a world where you just have
47:29
to ask ChatGPT everything. Do you know
47:31
what I mean? Like, there's a lot
47:34
of effort involved just sitting down, writing
47:36
the prompt, reading the report that comes
47:38
out of it, internalizing it,
47:41
synthesizing it, like, you'd be better off
47:43
actually just getting an education and then
47:45
checking in with the chatbot for the
47:47
things that chatbot is good at, at
47:50
least for, you know, I don't know,
47:52
next few years. Yeah, I don't know.
47:54
I believe that and I want to
47:57
believe that the thing I've spent my
47:59
life doing is, like... Learning is fun.
48:01
And if you can just do it
48:04
for your own enjoyment, like I don't
48:06
think learning the streets of London is
48:08
that fun, but I think learning broadly
48:11
about the world is fun. And so
48:13
you should do it if it's exciting
48:15
and fun too. Absolutely. I think that's
48:18
totally correct. I also, if I'm like
48:20
actually talking to a younger version of
48:22
myself. He would be six years old,
48:24
to be clear. What a young man
48:27
we're talking to today. Hey, little buddy.
48:29
Just advice on careers in general is
48:31
so bad, and, especially with how
48:34
much the world's gonna be changing, it's
48:36
gonna get even worse. And so, I
48:38
mean, who would have told me, what
48:41
kind of reasonable person would have told
48:43
me four years ago? Man, this computer
48:45
science stuff, just stop that, focus more
48:48
time on the podcast, right? So. Yeah,
48:50
it's going to change a lot, I
48:52
think. But see, that's not helpful. Like,
48:55
what are you going to do with
48:57
this idea that, like, all advice is
48:59
wrong? It's an even worse
49:02
position. Just this idea that, like, yeah,
49:04
be a little bit skeptical of advice
49:06
in general, really trust your own intuition,
49:08
your own interest, don't be delusional about
49:11
things, but, yeah, explore, try to get
49:13
a better handle on the world and
49:15
do more things, and run more experiments,
49:18
rather than just: this is the thing that's
49:20
going to be high leverage in AI,
49:22
and that's what I'm going to do,
49:25
based on this first-principles argument.
49:27
Yeah, I think "run more experiments" is
49:29
just really great, underused advice. Is that
49:32
why you built a meth lab in
49:34
your house? Yeah, it's going great for
49:36
me. Bought me that hot tub. This
49:39
is great. Thank you so much,
49:41
Dwarkesh. This is fun. Thanks for
49:43
having me on, guys. Well, Kevin, when
49:45
we come back, we ask listeners whether
49:48
they thought AI might be affecting their
49:50
critical thinking skills. It's time to reveal
49:52
what they all told us. Advanced solutions
49:55
from Comcast business help turn today's enterprises
49:57
into engines of modern business. It's how
49:59
financial firms turn into market tracking, client
50:02
servicing, cyber securing economic engines. With leading
50:04
connectivity and networking, advanced cybersecurity and expert
50:06
partnership, Comcast business is powering the engine
50:09
of modern business. Powering possibilities. Restrictions apply.
50:11
This podcast is supported by DeleteMe.
50:13
Protecting yourself against fraud, harassment, and identity
50:16
theft is something everyone needs to think
50:18
about. Data brokers bypass online safety measures
50:20
to sell your name, address, and social
50:23
security number to scammers. DeleteMe scours
50:25
the web to find and remove your
50:27
information before it gets into the wrong
50:29
hands. With over 100 million personal listings
50:32
removed, DeleteMe is your trusted privacy
50:34
solution for online safety. Get 20% off
50:36
your DeleteMe plan when you text
50:39
FORK to 64000. Text FORK to 64000.
50:41
Message and data rates apply. Well
50:45
Casey, a couple of weeks ago, we
50:47
talked about a study that had come
50:49
out from researchers at Carnegie Mellon and
50:52
Microsoft about AI and its effects on
50:54
critical thinking. That's right. And we wanted
50:56
to know how our listeners felt about
50:58
how AI was affecting their critical thinking.
51:00
And so we asked people to send
51:03
in their emails and voicemails. Yeah, and
51:05
we got so many responses to this.
51:07
I mean, almost a hundred responses from
51:09
our listeners that reflected kind of the
51:12
more qualitative side of this of how
51:14
people actually feel like AI is impacting
51:16
their ability to think and think deeply.
51:18
Yeah, and look, there may be a
51:20
bit of a selection effect in here.
51:23
I think if you think AI is
51:25
bad and destroying your brain and don't
51:27
touch the stuff, you probably are not
51:29
sending us a voicemail. But at the
51:31
same time, I do think that these
51:34
responses show kind of the range of
51:36
experiences that people are having. And so
51:38
yeah, we should dive in and find
51:40
out what our listeners are feeling. Okay,
51:43
so first up, we're going to hear
51:45
from some listeners who felt strongly that
51:47
AI was not making them dumber or
51:49
worse at critical thinking, who believed that
51:51
it is enhancing their ability to engage
51:54
critically with new material and new subjects.
51:56
So let's play one from a perspective
51:58
that we haven't really engaged with a
52:00
lot on this show so far, which
52:03
is People of the Cloth. My name
52:05
is Nathan Bourne and I'm an Episcopal
52:07
priest. A big part of my work
52:09
is putting things in conversation with one
52:11
another. I'm constantly finding stories, news articles,
52:14
chapters of books, little bits of story
52:16
that people have shared with me, and
52:18
interpreting them alongside scripture. I've long struggled
52:20
to find a good system to keep
52:22
track of all those little bits I've
52:25
found. Over the last year I've turned
52:27
to AI to help. I've used the
52:29
Readwise app to better store, index,
52:31
and query pieces that I've saved. I've
52:34
also used Claude to help me find
52:36
material that I would never encounter otherwise.
52:38
These tools have expanded my ability to
52:40
find and access relevant material that's helped
52:42
me think more deeply about what I'll
52:45
preach and in less time than I
52:47
used to spend sifting through Google results
52:49
and the recesses of my own hazy
52:51
memory. Wow, I love this one. This
52:54
one was particularly fascinating to me because
52:56
I've spent some time working on religion-related
52:58
projects. I wrote a book about going
53:00
to Christian College many years ago, and
53:02
I spent a lot of time in
53:05
church services over the years. And so
53:07
much of what the church services
53:09
I've been in have done is try
53:11
to find a modern spin or a
53:13
modern take or some modern insights on
53:16
this very old book, the Bible. And
53:18
I can imagine AI being very useful
53:20
for that. Oh yeah, absolutely. I mean,
53:22
this feels like a case where Nathan
53:25
is almost setting aside the question of
53:27
AI and critical thinking and just focusing
53:29
on ways that AI makes the researching
53:31
and writing that he has to do
53:33
every week much easier, right? Like, these
53:36
are just very good solid uses of
53:38
the technology as it exists, and they're
53:40
still leaving plenty of room to bring
53:42
his own human perspective to the work,
53:45
which I really appreciate. And, you know,
53:47
of course, always love to hear about a
53:49
man of the cloth sort of clasping
53:51
his hands together and saying, Claude, help
53:53
me. All right, let's hear the next
53:56
one. This is from a software engineer
53:58
named Jessica Mock who told us about
54:00
how she's taking a restrained approach to
54:02
asking AI for help with coding. When
54:04
I was being trained, my mentor told
54:07
me that I should avoid using auto-complete
54:09
and he said that was because I
54:11
needed to train my brain to actually
54:13
learn the coding and I took that
54:16
to heart. I do that now with
54:18
AI. I do use Copilot, but I
54:20
use it for floating theories, asking about
54:22
things that I don't know. But if
54:24
it's something that I know how to
54:27
do, I put it in myself, and
54:29
then I ask Copilot for a code
54:31
review. And I found that to be
54:33
pretty effective. My favorite use of Copilot,
54:36
though, is what does this error mean
54:38
when I'm debugging? I love asking that,
54:40
because you get more context into what's
54:42
happening, and then I start to understand
54:44
what's actually going on. Is it making
54:47
me dumber? I don't think so. I
54:49
think it's making me learn a lot.
54:51
I'm jumping into languages that I was
54:53
never trained in, and I'm trying things
54:55
that I normally would have shied
54:58
away from. So I think it really
55:00
depends on how you use it. So
55:02
I love this one. If you talk
55:04
to software engineers about how they solve
55:07
problems, a lot of what they'll do
55:09
is just ask a senior software engineer.
55:11
And that creates a lot of roadblocks
55:13
for people, because that senior software engineer
55:15
might be busy doing something else. Or
55:18
maybe you just feel a little bit
55:20
shy about asking them 15 questions a
55:22
day. What Jessica is describing is a way
55:24
where she just kind of doesn't have
55:27
to do that anymore. She can just
55:29
ask the tool, which is infinitely patient,
55:31
has a really broad range of knowledge,
55:33
and along the way she feels like
55:35
she is leveling up from a more
55:38
junior developer to a senior one. That's
55:40
pretty cool. Yeah, I like this one.
55:42
I think it also speaks to something
55:44
that I have found during my vibe
55:47
coding experiments with AI, is that it
55:49
does actually make me want to learn
55:51
how to code to build things. Even
55:53
if that will become increasingly unnecessary, there
55:55
is sort of just this like intellectual
55:58
kick in the pants where it's like,
56:00
you know, if you just like applied
56:02
yourself for a few weeks, you could
56:04
probably learn a little bit of Python
56:06
and start to understand some of what
56:09
the AI is actually doing here. Absolutely,
56:11
you know what makes me reliably want
56:13
to finish a video game? It's getting
56:15
a little bit good at a video
56:18
game, right? If I'm starting out and
56:20
I can't like sort of figure out
56:22
how to time my moves, I'll throw
56:24
it away. But that moment where you're
56:26
like, oh, I get this a little
56:29
bit, it unlocks, it unlocks, it, it
56:31
unlocks this whole world of curiosity. Try
56:35
again. Right. Wait, what is error 642?
56:38
And it turns out all that information
56:40
was on the internet and AI has
56:42
now made that accessible to us and
56:44
helps us understand. So if nothing else,
56:46
AI has been good for that. Yeah.
56:49
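A quick aside for readers: Jessica's favorite trick, asking "what does this error mean?", is easy to reproduce with any chat model, not just Copilot. Here's a minimal sketch using OpenAI's Python SDK; the model name and the sample error are illustrative assumptions, not anything from the episode.

```python
# Minimal sketch of the "what does this error mean?" workflow.
# Assumes `pip install openai` and an OPENAI_API_KEY environment
# variable; the model name and sample error are placeholders.
from openai import OpenAI

client = OpenAI()

error_text = "TypeError: unsupported operand type(s) for +: 'int' and 'str'"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": ("Explain what this error means and its likely causes. "
                     "Give context, not just a one-line fix.")},
        {"role": "user", "content": error_text},
    ],
)
print(response.choices[0].message.content)
```

The point, as Jessica says, is to ask for context rather than a quick patch, so you come away understanding what actually went wrong.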
This next one comes to us from
56:51
a listener named Gary. He's from St.
56:53
Paul, Minnesota, which is one of the
56:55
Twin Cities, Kevin, along with Minneapolis. And
56:57
it points to the importance of considering
57:00
different learning challenges or disabilities when considering
57:02
this question of AI's impact on critical
57:04
thinking, let's hear Gary. I'm a 62-year-old
57:06
marketing guy who does a lot of
57:09
writing. I'm always trying to get new
57:11
ideas, keep track of random thoughts. And
57:13
I also have ADHD, so I get
57:15
a ton of ideas, but I also
57:17
get a ton of distractions, to be
57:20
honest. And so what I've found with
57:22
AI is I get to have a
57:24
thought partner, if you will, who can
57:26
help me just download all of these
57:29
different ideas that I've got. And, you
57:31
know, if I need to follow a
57:33
thread, I can follow a thread by
57:35
asking more questions. But at the end
57:37
of one of these brainstorming sessions, I can
57:40
say just recap everything that we came
57:42
up with, give it to me in
57:44
a list, and all of a sudden
57:46
my productivity just gets massively improved because
57:48
I don't have to go back and
57:51
sort through all of these different notes,
57:53
all of these different things I've jotted
57:55
down all over and you know can
57:57
sort through what's real and what isn't
58:00
real. So it has been super helpful
58:02
to me in that way. Kevin what
58:04
do you make of this one? Yeah
58:06
I like this one because I think
58:08
that one of the things that AI
58:11
is really good for is people with
58:13
not just like challenges or disabilities with
58:15
learning, but just different learning styles. One
58:17
of the most impressive early uses of
58:20
ChatGPT that I remember hearing about was
58:22
the use in the classroom to sort
58:24
of tailor a lesson to a visual
58:26
learner or an auditory learner or just
58:28
someone who processes information through metaphors and
58:31
comparisons. It is so good at doing
58:33
that kind of work of making something
58:35
accessible and personalized to the exact way
58:37
that someone wants to learn something. And
58:39
you know, and I imagine that Gary
58:42
may be doing this already, but the
58:44
sort of use cases that he's describing
58:46
seem like they would be great for
58:48
somebody who wants to use one of
58:51
these voice mode technologies. I'm somebody who's
58:53
most comfortable on a keyboard, but there
58:55
are so many people that just love
58:57
to record notes to self, and there
58:59
are now a number of AI tools
59:02
that can help you organize those and
59:04
sort of turn them into really useful
59:06
documents. And so if you're the sort
59:08
of person that kind of just wants
59:11
to let your mind wander, talk into
59:13
your phone for a few minutes, and
59:15
then give the AI the job
59:17
of making it all make sense. We
59:19
have that now, and that is kind
59:22
of crazy and cool. Yep. Yeah.
59:24
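For anyone who wants to try the voice-note workflow Gary and the hosts describe, here's a minimal sketch of the transcribe-then-recap pipeline, using OpenAI's Python SDK. The model names and the file path are assumptions for illustration; any transcription model plus any chat model would do.

```python
# Minimal sketch: turn a rambling voice note into a recap list,
# the way Gary describes asking for at the end of a brainstorm.
# Assumes `pip install openai` and OPENAI_API_KEY; model names
# and the audio file path are placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the voice note.
with open("voice_note.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    )

# 2. Recap the brainstorm as a list.
recap = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Recap every distinct idea in this brainstorm as a concise bulleted list."},
        {"role": "user", "content": transcript.text},
    ],
)
print(recap.choices[0].message.content)
```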
All right. Let's do one more in this
59:26
camp of people who don't think that
59:28
AI is making them dumber or worse
59:30
at critical thinking. My name is Anna,
59:33
and I live in a suburb of
59:35
Chicago. I wanted to share a recent
59:37
experience I had with AI and how
59:39
it made me think harder about solving
59:42
a problem. I'm self-employed and don't have
59:44
the benefit of a team to help
59:46
me if I get stuck on something.
59:48
I was using an app called Airtable,
59:50
which is a database product. I consider
59:53
myself an advanced user, but not an
59:55
expert. I was trying to set up
59:57
something relatively complex, couldn't figure it out,
59:59
and couldn't find an answer in Airtable
1:00:02
forums. Finally, I asked ChatGPT. I
1:00:04
explained what I was trying to do
1:00:06
in a lot of detail and asked
1:00:08
ChatGPT to tell me how I
1:00:10
should configure Airtable to get what I
1:00:13
was looking for. ChatGPT gave me
1:00:15
step-by-step instructions, but they were incorrect. I
1:00:17
prompted ChatGPT again and said, Airtable
1:00:19
doesn't work that way. And ChatGPT
1:00:21
replied, you're right. Here are some additional
1:00:24
steps you should take. The resulting instructions
1:00:26
were also incorrect, but they were enough
1:00:28
to give me an idea, and my
1:00:30
idea worked. In this example, the back
1:00:33
and forth with ChatGPT was enough
1:00:35
to help me stretch the skills I
1:00:37
already had into a new use case.
1:00:39
I love this one because I think
1:00:41
what made... AI helpful to Anna in
1:00:44
this case is not that she used
1:00:46
it and it immediately gave her good information;
1:00:48
it's that she knew enough about it
1:00:50
to know that it was unreliable, and
1:00:53
so to do her own deeper dive
1:00:55
when, in her experience, she wasn't
1:00:57
getting good information from the AI. My
1:00:59
worry is that people who aren't Anna,
1:01:01
who aren't sort of deeply thinking about
1:01:04
these things, will just kind of blindly
1:01:06
go with whatever the AI tells them
1:01:08
and then if it doesn't work, they'll
1:01:10
just kind of give up. I think
1:01:12
it really is a credit to her
1:01:15
that she kept going and kept figuring
1:01:17
out what is the real solution to
1:01:19
this problem. It is a risk, but
1:01:21
let me just say, like, and this
1:01:24
is just kind of a free tip
1:01:26
for your life: if you're someone
1:01:28
who struggles with using software, I increasingly
1:01:30
believe that one of the best uses
1:01:32
of chat bots is just asking them
1:01:35
to explain to you how to use
1:01:37
software. I recently got a PC laptop
1:01:39
and like everything is different than I've
1:01:41
been used to for the past 20
1:01:44
years of using a computer. But it
1:01:46
has a Copilot key, and I press it and I can say,
1:01:48
how do I connect an Xbox controller
1:01:50
to this thing? And it told me
1:01:52
in 10 seconds, saving me a lot
1:01:55
of Googling. So anyway, Anna, you're on
1:01:57
to something here. It said, get a
1:01:59
life. It actually did say that. I
1:02:01
was offended. Shame on you, Copilot.
1:02:03
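Anna's corrective back-and-forth is also easy to script if you'd rather iterate programmatically: ask for step-by-step instructions, then push back with the conversation history intact when they're wrong. A minimal sketch below, again with OpenAI's Python SDK; the model name and the prompts are illustrative assumptions.

```python
# Minimal sketch of Anna's back-and-forth: request instructions,
# then correct the model while keeping the chat history.
# Assumes `pip install openai` and OPENAI_API_KEY; the model name
# is a placeholder, and the goal description is up to you.
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"

messages = [{
    "role": "user",
    "content": ("Explain, step by step, how to configure an Airtable base "
                "so that ... (describe your goal in detail here)."),
}]
first = client.chat.completions.create(model=model, messages=messages)
print(first.choices[0].message.content)

# The steps were wrong, so say so and ask again; even a second
# imperfect answer can spark the idea that actually works.
messages += [
    {"role": "assistant", "content": first.choices[0].message.content},
    {"role": "user", "content": "Airtable doesn't work that way. Try again."},
]
second = client.chat.completions.create(model=model, messages=messages)
print(second.choices[0].message.content)
```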
All right. Now let's hear from some listeners,
1:02:06
Kevin, who are more skeptical about the
1:02:08
way AI might be affecting their own
1:02:10
cognitive abilities, or maybe their students' ability
1:02:12
to get their work done. For this
1:02:15
next one, I want to talk about
1:02:17
an email we got from a Professor
1:02:19
Andrew Fano who conducted an experiment in
1:02:21
a class he teaches for MBA students
1:02:23
at Northwestern. Northwestern, of course, my alma
1:02:26
mater, Go Wildcats, and that is why
1:02:28
we selected this one. And Andrew sent
1:02:30
us a sort of longer story about
1:02:32
a class that he was teaching. And
1:02:35
the important thing to know about this
1:02:37
class is that he had divided the
1:02:39
students into two groups. One could use
1:02:41
computers, which meant also using large language
1:02:43
models, and another group of students who
1:02:46
could not. And then he had them
1:02:48
present their findings. And when the computer
1:02:50
group presented, he told us that they
1:02:52
had sort of much more creative ideas,
1:02:54
more outside the box, and that those
1:02:57
solutions involved listing many of the items
1:02:59
that the LLMs had proposed for them.
1:03:01
And one of the reasons that Andrew
1:03:03
thought that was interesting was that many
1:03:06
of the ideas that they presented were
1:03:08
ones that had actually been considered and
1:03:10
rejected by the people who were not
1:03:12
using the computers because they found those
1:03:14
ideas to be sort of too outlandish.
1:03:17
And so the observation that Andrew made
1:03:19
about all of this was that the
1:03:21
computer-using group saw these AI-generated
1:03:23
ideas as something that they could present
1:03:26
without them reflecting negatively on themselves because
1:03:28
they weren't their ideas. These were the
1:03:30
computer's ideas. And so it was like
1:03:32
the LLMs were giving them permission to
1:03:34
suggest things that might otherwise seem embarrassing
1:03:37
or ridiculous. So what do you make
1:03:39
of that? That's interesting. I mean, I
1:03:41
usually think of AI as being kind
1:03:43
of a flattener of creative ideas because
1:03:45
it is just sort of trying to
1:03:48
give you like the most, you know,
1:03:50
predictable outputs. But I like this angle
1:03:52
where it's like actually, you know, giving
1:03:54
you the permission to be a little
1:03:57
weird. Because you can just say, if
1:03:59
someone hates the idea, you can just
1:04:01
say, oh, that was the AI. Yeah,
1:04:03
don't blame me. Blame this corpus of data
1:04:05
that was harvested from the internet. Which
1:04:08
is why I plan, if anyone objects
1:04:10
to any segments that we do on
1:04:12
the show today or in the future,
1:04:14
I do plan on blaming Chad Chippett.
1:04:17
That was Chad Chippett's idea. Yeah,
1:04:19
interesting. If it's a good segment, I
1:04:21
did it. If not? It was Claude.
1:04:23
It was Claude. A listener named Katia,
1:04:25
who's from Switzerland, she told us about
1:04:28
how looming deadline pressure caused her to
1:04:30
maybe over-defer to AI outputs. She
1:04:32
wrote, quote, last semester I basically did
1:04:34
an experiment on this myself. I was
1:04:36
working on a thesis during my master
1:04:39
studies and decided to use some help.
1:04:41
My choice fell on Cursor, which is
1:04:43
one of these AI coding products. She
1:04:45
writes: Initially I intended to use it for
1:04:48
small tasks only just to be a
1:04:50
bit faster, but then the deadline was
1:04:52
getting closer, panic was setting in, and
1:04:54
I started using it more and more,
1:04:56
the speed was intoxicating, I went from
1:04:59
checking every line of code to running
1:05:01
rounds of automatic bug fixing without understanding
1:05:03
what the problems were or what was
1:05:05
being done. So I actually think this
1:05:08
is the most important email that we've
1:05:10
gotten so far because it highlights a
1:05:12
dynamic that I think a lot of
1:05:14
people are going to start feeling over
1:05:16
the next couple of years, which is
1:05:19
my bosses have woken up to the
1:05:21
fact that AI exists. They're gradually raising
1:05:23
their expectations for how much I can
1:05:25
get done. If I am not using
1:05:27
the AI tools that all my co-workers
1:05:30
are now using, I will be behind
1:05:32
my co-workers and I will be putting
1:05:34
my career at risk, right? And so
1:05:36
I think we're going to see more
1:05:39
and more people do exactly what Katia
1:05:41
did here and just use these tools
1:05:43
like Cursor. And, you know, to
1:05:47
a certain level, I think that's okay.
1:05:47
We've always used productivity tools to make
1:05:50
ourselves more productive at work. There is
1:05:52
a moment where you actually just stop
1:05:54
understanding what is happening, and that is
1:05:56
a recipe for human disempowerment, right? At
1:05:59
that point, you're just sort of barely
1:06:01
supervising a machine, and the machine is
1:06:03
now doing most of your job. So
1:06:05
this is kind of like a small
1:06:07
story that I think contains a dark
1:06:10
warning about what the future might look
1:06:12
like. Yeah, I think that kind of
1:06:14
mental outsourcing does worry me, the sort
1:06:16
of autopilot of human cognition. An analogy
1:06:18
I've been... thinking about recently, in trying
1:06:21
to distinguish between tasks that we should
1:06:23
outsource to AI and tasks that we
1:06:25
probably shouldn't is forklifting versus weightlifting. Okay,
1:06:27
tell me about this. So there are
1:06:30
two reasons that you might want to
1:06:32
lift heavy things. One of them is
1:06:34
to get them from point A to
1:06:36
point B for some like, you know,
1:06:38
purpose, maybe you work in a warehouse.
1:06:41
Obviously you should use a forklift for
1:06:43
that, right? There's no salutary benefit to carrying heavy
1:06:45
things across a warehouse by yourself. And
1:06:47
that's very slow, it's very inefficient, and
1:06:50
the point of what you're doing is
1:06:52
to try to get the thing from
1:06:54
point A to point B. Use a
1:06:56
forklift for that. Weightlifting is about self-improvement.
1:06:58
Weightlifting is, yes, you could use a
1:07:01
machine to lift this heavy object, but
1:07:03
it's not going to make you stronger
1:07:05
in any way. The point of weightlifting
1:07:07
is to improve yourself and your own
1:07:09
capabilities. When you're in a situation where
1:07:12
you have the opportunity or the choice
1:07:14
of using AI to help you do
1:07:16
some task, I think you should ask
1:07:18
yourself whether that task is more like
1:07:21
forklifting or more like weightlifting and choose
1:07:23
accordingly. I think it is a really
1:07:25
good analogy and people should draw from
1:07:27
that. I want to offer one last
1:07:29
thought of my own Kevin, which is
1:07:32
that while I think it is important
1:07:34
to continue this conversation of How is
1:07:36
AI affecting my critical thinking? I think
1:07:38
in this last anecdote, we see this
1:07:41
other fear being raised, which is, what
1:07:43
if the issue isn't, do I still
1:07:45
have my critical thinking skills, and what
1:07:47
if the actual question is, do I
1:07:49
have time to do critical thinking? Because
1:07:52
I think that one effect of these
1:07:54
AI systems is that everybody is going
1:07:56
to feel like they have less time.
1:07:58
The expectations on them have gone up
1:08:00
at work. They're expected to get more
1:08:03
done because people know that they have
1:08:05
access to these productivity tools. And so
1:08:07
you might say, you know what, I
1:08:09
actually really want to take some time
1:08:12
on this and I don't want to
1:08:14
turn to the LLM and I want
1:08:16
to bring my own human perspective to
1:08:18
this and you're going to see all
1:08:20
your coworkers not doing that. And it
1:08:23
is just going to drag you into
1:08:25
doing less and less of that critical
1:08:27
thinking over time. So while I think,
1:08:29
you know, is AI making me dumber
1:08:32
is a really, like, interesting and funny
1:08:34
question that we should keep asking. I
1:08:36
think am I going to have the
1:08:38
time that I need to do critical
1:08:40
thinking might actually be the more important
1:08:43
question. Yeah, that's a really good point.
1:08:45
All right, well that's enough critical thinking
1:08:47
for this week. I'm going to go
1:08:49
be extremely ignorant for the next few
1:08:51
days if that's okay with you Kevin.
1:08:54
That's fine by me. News
1:09:12
never stops and neither does business.
1:09:14
Advanced solutions from Comcast business help
1:09:16
turn today's enterprises into engines of
1:09:19
modern business. It's how financial firms
1:09:21
turn into market tracking, client servicing,
1:09:23
cyber securing, economic engines. With leading
1:09:26
connectivity and networking, advanced cybersecurity and
1:09:28
expert partnership, Comcast business is powering
1:09:30
the engine of modern business. Powering
1:09:33
possibilities. Restrictions apply. This
1:09:35
podcast is supported by DeleteMe.
1:09:38
Protecting yourself against fraud, harassment, and
1:09:40
identity theft is something everyone needs
1:09:42
to think about. Data brokers bypass
1:09:45
online safety measures to sell your
1:09:47
name, address, and social security number
1:09:49
to scammers. DeleteMe scours the
1:09:52
web to find and remove your
1:09:54
information before it gets into the
1:09:56
wrong hands. With over 100 million
1:09:59
personal listings removed, DeleteMe is
1:10:01
your trusted privacy solution for online
1:10:03
safety. Get 20% off your DeleteMe
1:10:06
plan when you text FORK
1:10:08
to 64000. Text FORK to 64000.
1:10:11
Message and data rates apply. Hard
1:10:13
Fork is produced by Rachel Cohn
1:10:15
and Whitney Jones. We're edited this
1:10:18
week by Matt Collette. We're fact-checked
1:10:20
by Ena Alvarado. Today's show is
1:10:22
engineered by Alyssa Moxley. Original music
1:10:25
by Mary and Marianne. Our audience
1:10:27
editor is Nell Gallogly. Video production
1:10:29
by Chris Schott, Sawyer Roque, and
1:10:32
Pat Gunther. You can watch this
1:10:34
whole episode on YouTube at youtube.com/hardfork.
1:10:37
Special thanks to Paula Szuchman, Pui-Wing
1:10:39
Tam, Dalia Haddad, and Jeffrey Miranda.
1:10:41
You can email us at hardfork
1:10:44
at nytimes.com or
1:10:46
if you're planning a military operation,
1:10:48
just add us directly to your
1:10:51
Signal chats. You
1:11:10
know where your business would be
1:11:13
without you. Imagine where it could
1:11:15
go with more of you. Well,
1:11:17
with Wix, you can create a
1:11:19
website with more of your vision,
1:11:21
your voice, your expertise. Wix gives
1:11:23
you the freedom to truly own
1:11:26
your brand and do it on
1:11:28
your own, with full customization and
1:11:30
advanced AI tools that help turn
1:11:32
your ideas into reality. Scale up
1:11:34
without being held back by cookie
1:11:37
cutter solutions and grow your business
1:11:39
into your online brand. Because without
1:11:41
you, your business is just business
1:11:43
as usual. Take control. Go to
1:11:45
wix.com.