Episode Transcript
0:00
Hey upstream listeners, today on
0:02
upstream we're revisiting this classic
0:04
interview with Kevin Kelly, founding
0:06
executive editor of Wired Magazine,
0:08
technologist, and author. This is a discussion
0:10
I turn to repeatedly. Not just for
0:12
Kelly's concise wisdom and optimism, but especially
0:14
now as we navigate how far technology
0:17
will take us in the age of
0:19
AI. Up ahead, you'll hear Kelly discuss
0:21
what makes us uniquely human and how
0:23
we've invented our humanity. Please
0:25
enjoy. Kevin,
0:41
welcome to the podcast. Excited to have
0:43
you on. Oh, it's always a pleasure
0:45
to chat with you, and I'm really
0:47
delighted to be invited back. Thank you.
0:49
We're excited to dive into your new
0:51
book, which is an advice book, different
0:54
kind of book. You read all advice
0:56
books that are written a long time
0:58
ago, and a lot of those principles
1:00
still make sense. Are there certain
1:02
genres of advice that you're inspired
1:05
by, or see yourself in the lineage
1:07
of, or how do you think about
1:09
advice as a category? You know, advice
1:11
columns. A lot of advice is wrapped
1:13
very, very smartly, I say,
1:15
in stories. We communicate
1:17
best in stories. And most
1:19
of advice books will have stories
1:21
about these things. And that is a
1:24
really great way to convey it
1:26
because you're kind of showing it
1:28
rather than telling it. And then
1:30
you will remember it. I'm not a
1:32
very good storyteller. So I
1:35
decided to do it my way, which is
1:37
these telegraphic aphorisms, these
1:40
adages, these lessons, these little
1:42
bits, these proverbs, which
1:44
suits me and is how I
1:46
like to consume the advice. I like
1:49
to collect little quotes and
1:51
things and they work for me
1:54
as reminders. So that's what
1:56
I produced because I'm no good
1:58
at making a story. But
2:00
I can make a little bit
2:02
of viral tweeting and so that's
2:05
what I did. Let's go through
2:07
some of them. You mentioned gratitude.
2:09
You have one, gratitude will unlock
2:12
all other virtues and it's
2:14
something you can get better at.
2:16
Talk about how it unlocks other
2:18
virtues and talk about how you
2:20
get better at it. I think in a weird
2:23
way, I think like gratitude
2:25
and trust and empathy are
2:27
connected. They're almost maybe
2:29
different faces of the
2:32
same virtue in some way,
2:34
where gratitude you kind
2:36
of are acknowledging your
2:39
unspecialness, your luck,
2:41
you're acknowledging that you
2:44
did not earn it in a certain
2:46
sense, that it was a gift.
2:48
And trust is in some
2:50
ways in my own mind connected
2:53
in that same way, or
2:55
gratitude is a
2:57
form of trust, maybe the same,
2:59
or this form of empathy,
3:02
where you're kind of being
3:04
able to put yourself in
3:06
someone else's position? So in
3:09
a way that I haven't
3:11
really articulated, I think those
3:13
three qualities are very
3:15
much joined in a way, and
3:18
they're kind of all
3:20
expressions of something similarly
3:22
deep in this kind of
3:24
connection that we have, not just
3:27
in a superficial way
3:29
between people, but that
3:31
we are actually of the
3:33
same life. I mean, the life that
3:35
we share is the same
3:37
exact life as we have
3:40
a common ancestor, no matter
3:42
who we are. And so I think
3:44
that's what I'm trying to
3:46
get at is that, you
3:48
know, being thankful for what
3:50
it is for our lives
3:52
or for what we have, which
3:54
are given to us. It's very similar to
3:56
the kind of trust that we would have in another person
3:59
that they're going to do well, which is kind
4:01
of a, the same kind of sense of
4:03
us having empathy for another, saying I'm
4:05
connected to you, we are in this
4:07
together, there is something in common that
4:09
we have, and when you work on
4:12
gratitude, it's kind of a way to
4:14
work on those other ones at the same
4:16
time. There's this old Maya Angelou quote,
4:18
of people will forget what you say,
4:20
but they'll remember how you made them
4:22
feel. And I used to think that was
4:25
kind of like a bug, like, isn't substance
4:27
really important, but people just
4:29
operate that way and you have an
4:31
aphorism here. You know, when choosing between
4:33
being right and being kind, choose being
4:35
kind. And it just kind of speaks
4:38
to what people really care about is,
4:40
are you with them? Are you part of
4:42
the tribe? Are you aligned with them? And,
4:44
you know, arguments are abstract. They're
4:46
kind of changing. They're evolving. I'm taking
4:48
a little bit of an abstract kind
4:50
of meta commentary on why that might
4:52
be true. Why people care more about
4:54
how you make them feel than what
4:56
you actually say, or
4:58
why they care more about, you know, being
5:00
kind over being right. It's sort of
5:03
like, why do we remember smell so
5:05
much, I think, because there is a
5:07
way in which they're kind of, they're
5:09
wired further down into our
5:12
brainstem, the experiences, it's fundamental.
5:14
So it's not like the
5:16
emotional component is,
5:18
sort of like, an overlay,
5:21
it's more like an underlay. Yeah,
5:23
the way that we have these
5:25
associations that they color everything, so
5:27
we can have trigger responses that
5:29
are way below or way faster than
5:32
our intellectual intelligence.
5:34
I think we tend to
5:36
overrate intelligence. There was a whole
5:38
bunch of studies done about
5:40
people making decisions and then
5:42
how it reflected or didn't
5:44
their gut responses where they
5:47
would make a decision without thinking,
5:49
like on first impressions, and
5:51
then they would correlate that
5:54
with whether they were correct or
5:56
not, particularly on long-term
5:58
things like
6:00
character evaluations. And
6:02
their first impression
6:04
was usually the most accurate
6:07
impression before they even
6:09
applied logic; it was faster than
6:11
logic. So I think there's something
6:13
going on about that, the
6:15
way they were constructed, where
6:18
the primeval circuits of first
6:20
impressions and responses and
6:23
feeling, they kind of work
6:25
faster. They work
6:27
stronger than the rational thing.
6:29
And a lot of what our humanity
6:32
is about is trying to
6:34
not just overcome that, but I
6:36
would say manage it or steer
6:38
it or at times elevate
6:40
it because a lot of times
6:43
it's absolutely right. And lots
6:45
of times it's not the best
6:47
thing. I mean, we have anger
6:49
and other emotions that are also
6:51
kind of have the same kind
6:53
of power. So yeah, so for
6:55
whatever reason we're governed by
6:58
these things that seem to trump
7:00
what the logic says and that's
7:02
another piece of advice about you
7:04
know arguing: you can't rationally argue
7:07
somebody out of
7:09
an opinion that they didn't rationally,
7:11
you know, get into through
7:14
rationality and so when you're
7:16
having disagreements the emotional
7:19
component is incredibly
7:21
important in changing
7:23
someone's mind or convincing them
7:26
or persuading them or Yeah, you
7:28
can't reason someone out of a notion they
7:30
didn't reason themselves into. I was gonna bring
7:32
that up. It's fascinating. And what you're usually
7:34
arguing about is not actually the thing that
7:37
you're arguing about. There's often a subtext to
7:39
the argument. And I'm curious, is your mental
7:41
model of emotions something along the lines of
7:43
almost feedback loops of what was more likely
7:46
to help us thrive within our tribes like,
7:48
you know, a thousand years ago or whenever
7:50
it was? Some of those sort of
7:52
emotions are still adapted to the modern environment, some
7:55
of them are no longer adapted. But is
7:57
that fundamentally what they were doing, kind
7:59
of intuitions about what was more likely
8:01
to raise our reputation in our
8:03
tribe or? Yeah, I think so. That
8:05
if you look at, there was some anthropological
8:08
work studying the last,
8:10
you know, the intact tribal people.
8:12
And it was really, some of these anthropologists
8:14
went very deep and lived with
8:17
them for a long time and got
8:19
to know them very well and were
8:21
able to observe them for long
8:23
periods of time in all their dimensions.
8:25
It was really, really fascinating.
8:27
There was, I think, Colin
8:29
Turnbull, one of the
8:31
famous anthropologists, lived with, at the
8:34
time they were called Pygmy People, I
8:36
don't know the current proper name
8:38
for them. And what was interesting
8:40
was, they lived in these grass-leaf
8:43
huts, which basically had no
8:45
walls. Everybody heard everything in
8:47
the little clan in the settlement. They
8:49
would have these, like any humans,
8:51
arguments and fights
9:00
over all kinds of you know things
9:02
that people argue and fight
9:04
about. But the thing is,
9:06
everybody could
9:08
hear everything and they could hear
9:11
both sides and the whole
9:13
drama. But the interesting thing
9:15
was, well, while they could be
9:17
very very vocal and they almost
9:19
get to the point where they're
9:21
like fist fights and almost
9:23
hurting each other but they would
9:26
never go all that way
9:28
of really injuring one another.
9:30
It was a lot of releasing.
9:33
It was kind of like
9:35
therapeutic fighting in a
9:38
certain sense, but they
9:40
always had certain lines
9:42
and the line was they
9:44
would never do anything
9:46
that jeopardized the health
9:49
of the group. Okay. And
9:51
so there's something sociological
9:54
in us as a
9:56
social animal
9:59
that as primates,
10:01
that even though we
10:03
have individuality, we've
10:05
evolved to also pay
10:08
close attention to the health
10:10
of the group and the
10:12
status of the group. And
10:14
so maybe that's what makes
10:17
us kind of unique is
10:19
that we're social animals
10:21
as well as individuals. And
10:23
we have these competing instincts.
10:26
You have an aphorism here.
10:28
A great way to understand
10:30
yourself is to seriously reflect
10:32
on everything you find irritating
10:34
in others. Can you unpack that
10:36
one? Yeah, it has to do, and some of
10:39
the other ones, particularly these
10:41
days with our new invention of AI,
10:43
have to do with the fact that we
10:45
humans, individually and
10:48
collectively as a species, are very,
10:50
very opaque to ourselves.
10:52
We really don't know what makes us
10:54
human, how our minds work. That's
10:56
part of the thrill and excitement
10:59
and the worry about what's going on
11:01
with AI is that we're trying to
11:03
make something, replicate something that we
11:05
don't even know what it is,
11:07
our intelligence. But there's also
11:10
individually. This was one of the
11:12
lessons of the quantified self was
11:14
that we just don't have a very
11:16
good intellectual understanding
11:19
of how our own minds work, how we
11:21
work, where we're coming from, why
11:23
we even make decisions that we
11:25
make. And looking at one of the
11:27
ways we can kind of look into
11:30
ourselves is to find out what kind
11:32
of agitates us and what kind of,
11:34
where we care and where we're paying
11:37
attention to ourselves. Again, we're
11:39
kind of, that's another bit
11:41
like, pay attention to where you
11:43
pay attention to. And so our, you know,
11:45
consciousness is a very
11:47
gossamer thing. It's a very slippery
11:50
thing. It's kind of like the
11:52
only tool we have to try
11:54
and probe us, but it's not
11:56
that dependable itself.
11:59
And so this is just another
12:01
way of paying attention; noticing what
12:03
irks us is another way to kind of
12:05
help us to get a view of ourselves.
12:08
It's not necessarily saying you are
12:10
like that, just saying this is
12:12
a signal. This is another data
12:14
point you can use to try and
12:16
dissect yourself because we are
12:18
really hard to self dissect.
12:21
Competing instincts, competing
12:23
emotions. People get anxious
12:25
about that, but that's kind
12:27
of the default state. You have
12:29
one, whenever you can't decide which
12:31
path to take, pick the one that
12:34
produces change. And I'm curious how
12:36
you balance that with sort of
12:38
like commitments and sticking to them, whether
12:40
it's a partner or whether it's a
12:43
long-term job or. Well, yes, I think there
12:45
are exceptions. You know, I say elsewhere
12:47
that, you know, except for like marriage,
12:49
it's always good to think about
12:52
the exit first in some ways.
12:54
And whenever we're arranging a business
12:56
deal, that's exactly where I start
12:58
with is, okay, what does the exit
13:01
of this look like? So we can
13:03
structure the beginning of
13:05
it. Yeah, so I think there
13:07
are some exceptions where you have
13:09
a commitment that's sort of
13:11
the nature of the commitment is like,
13:14
yeah, despite all the opportunities,
13:16
and all the other choices, that
13:18
I'm not going to make that
13:20
choice. So I think it is, or
13:20
I think it is bounded in that
13:23
sense. But outside of that
13:27
boundary, there's still many choices that
13:29
we have, you know,
13:31
in terms of occupation, where we live. We'll continue
13:33
our interview in a moment after a
13:36
word from our sponsors. Even if you
13:38
think it's a bit overhyped, AI
13:40
is suddenly everywhere. From self-driving
13:42
cars to molecular medicine to business efficiency.
13:44
If it's not in your industry
13:46
yet, it's coming fast. But AI
13:48
needs a lot of speed and
13:50
computing power. So how do you
13:53
compete without costs spiraling out of
13:55
control? Time to upgrade to the
13:57
next generation of the cloud. Oracle
13:59
Cloud Infrastructure, or OCI. OCI is
14:01
a blazing fast and secure platform
14:03
for your infrastructure, database, application development,
14:05
plus all your AI and machine
14:07
learning workloads. OCI costs 50% less
14:09
for compute and 80% less for
14:11
networking. So you're saving a pile
14:13
of money. Thousands of businesses have
14:15
already upgraded to OCI, including Vodafone,
14:17
Thomson Reuters, and Suno AI. Right
14:20
now, Oracle is offering to cut
14:22
your current cloud bill in half
14:24
if you move to OCI for
14:26
new US customers with minimum financial
14:28
commitment. Offer ends March 31st. See
14:30
if your company qualifies
14:33
for this special offer
14:35
at oracle.com/turpentine. That's oracle.com
14:37
slash turpentine. Every time I hop
14:39
on the 101 freeway in SF, I
14:42
look up at the various tech billboards
14:44
and think, damn, turpentine should
14:46
definitely be up there. And
14:48
I bet I'm not alone. In
14:50
a world of endless scrolling and
14:52
ad blockers, sometimes the most powerful
14:54
way to reach people is in
14:57
the real world. That's where AdQuick
14:59
comes in. The easiest way to
15:01
book out-of-home ads, like billboards, vehicle
15:03
wraps, and airport displays, the same
15:05
way you would order an Uber.
15:08
Started by Instacart alums, AdQuick
15:10
was born from their own
15:12
personal struggles, to book and measure ad
15:14
success. AdQuick's audience targeting,
15:17
campaign management,
15:19
and analytics bring the precision
15:21
and efficiency of digital to the
15:24
real world. Ready to get your
15:26
brand the attention it deserves? Visit AdQuick.com.
15:28
Make the case for why
15:30
people should consider having more
15:33
kids than they think. Oh my
15:35
gosh. Well, first of all, there
15:37
is no overpopulation on this planet.
15:40
There are certainly places that
15:42
have too many people
15:44
to live well, but there are plenty
15:46
of places where there's nobody.
15:49
So there's, so not only is
15:51
there not an overpopulation,
15:53
there's actually going to be
15:55
quite the reverse in the next
16:00
50 years and beyond, where there
16:00
is simply going to be
16:02
under population, a population
16:04
collapse. So that should be
16:06
removed. That should not
16:08
be anybody's reason for not having
16:11
kids. And maybe it's even a reason,
16:13
if you care about the long-term
16:15
human species, because for every
16:17
woman who doesn't have any
16:19
children, another woman has to have four,
16:22
to just keep the current level,
16:24
or whatever level it is,
16:26
replacement level. But beyond that, I
16:28
think, I am trying to recall,
16:31
but I don't think I've ever met
16:33
anybody who had regretted having
16:35
more children. I would say modern
16:38
times, or in kind of the
16:40
developed world. Certainly, there were people
16:42
who didn't have birth control, and
16:45
that might not be true, but I
16:47
would say in the modern developed world,
16:49
I have not met many people like that.
16:51
And that's also been my own
16:53
experience. And so... I
16:56
would say that there
16:58
is a compounding
17:01
joy from it. So
17:03
the people who
17:06
benefit from it
17:08
are the kids
17:10
themselves, where we
17:13
did have our son
17:15
keep requesting that
17:18
my wife lay
17:20
him another brother.
17:23
I grew up with five. And
17:27
so we know the joy of that. And that
17:29
was, I think, that to me is kind
17:31
of like one of the most
17:33
important ones. How do you think
17:36
we address the population collapse?
17:38
Because it feels like it's already
17:40
priced in. You can't just start
17:42
having 18-year-olds. Is it that robots
17:45
and AI just kind of stand in
17:47
for some of the labor stuff? Or
17:49
how do you think that that
17:51
plays out? I don't know how
17:53
it plays out. It's really perplexing
17:56
to me and I
17:58
do know that all
18:00
these solutions in terms of
18:02
having humans replace themselves
18:05
to raise birth rates have not
18:07
worked, including just directly
18:09
paying people. Now, it may be
18:12
that they haven't been offered enough
18:14
sums that maybe if you do
18:17
calculations and you figure
18:19
that every baby born is
18:21
worth a million dollars to
18:23
the economy over its lifetime,
18:25
that it's worth paying a
18:27
million dollars, so maybe at
18:29
the right price, it would begin
18:32
to work. So far, I don't
18:34
know what that is, and if
18:36
there isn't a way to do
18:39
that, the question is, well, what
18:41
happens to the economy? And
18:43
there, you could imagine AIs
18:46
becoming, in some ways, the
18:48
audience and the market, in the
18:50
way that, you know, we build
18:53
houses for cars, which we
18:55
call garages. Maybe we,
18:57
you know, make entertainment for
18:59
the AIs in some ways. Maybe we
19:01
have to build houses for robots,
19:03
pools, I don't know. So,
19:05
that's possible. That's one possible
19:07
way out. But it is something that
19:10
I don't have very many
19:12
good ideas about. Let's go
19:14
into more of the AI
19:16
human relationship because I've heard
19:18
you, you're famously an optimist.
19:20
And you're excited about where
19:22
things will take us. And
19:24
I'm curious. because some people
19:26
will say, hey, they're optimists, but
19:28
they also believe that, you know,
19:30
humans one day will be subservient
19:33
to the AI in the
19:35
same way that, I don't know,
19:37
Neanderthals were kind of the previous
19:39
form of evolution, and we're not the
19:41
last form of evolution.
19:43
I'm curious where you net out
19:46
on that. Well, yeah, I mean, maybe
19:59
in between. So first
20:01
I would concede or even
20:04
postulate that our humanity
20:07
is malleable through something we've
20:09
invented and we're not done
20:11
yet. So we have been inventing
20:14
ourselves and we will
20:16
continue to change ourselves. And
20:19
so we want to become
20:21
better humans, but we don't
20:23
know what that is. And the
20:26
question that we also don't know
20:28
is, are there multiple futures
20:30
for us? Okay, can we become
20:33
different kinds of humans? And
20:35
that prospect is really
20:37
very problematic in many ways.
20:39
We certainly could imagine there would
20:42
be varieties of us that are
20:44
not going to change anything,
20:46
the naturals. They're
20:48
simply never going to do
20:50
genetic engineering and not
20:52
permit very much alteration
20:55
in their bodies. And that's like
20:57
the Amish, right? And then we
20:59
can easily imagine people who are
21:01
going to be very quick to,
21:03
they want to eradicate the gene
21:05
for Alzheimer's and Parkinson's, from
21:08
their germline and all future
21:10
descendants tomorrow. You know, then we
21:12
could see a forking, at least one
21:14
fork, maybe more, going on and I
21:16
don't know what to think about that.
21:18
That's, is that, you know, is that
21:20
surrendering? Is it kind of like the
21:23
union of the United States? You can't
21:25
succeed. You can't succeed? There can
21:27
be no divisions, we must
21:29
remain one. So I can easily imagine
21:32
a bunch of people believing that,
21:34
that the worst fate would be to have
21:37
speciation among humans. And
21:39
others who would say, no, this is actually
21:41
natural, until recently there
21:43
were always other varieties of
21:46
hominids on this planet, until
21:48
probably we got rid of
21:50
them, but then we don't know. And
21:52
And so maybe it's kind
21:54
of like we're just returning to
21:56
the time when we had multiple
21:59
sentient beings. Which is still
22:01
a little different. I mean,
22:03
I call these AIs artificial
22:05
aliens and that's fine,
22:07
but that's not the same thing
22:09
as kind of allowing a forking
22:11
in your own species. The
22:14
danger there is, you know,
22:16
the most effective argument we
22:18
have against racism is
22:20
that there is no difference
22:22
between us. But what if there
22:24
really was one? And that's like, what
22:27
do we do with that? And so,
22:29
there might be a very good reason
22:31
to not permit speciation. And there's
22:33
a few ways speciation
22:35
can happen, right? There's the genetic
22:38
engineering example, there's the, you know,
22:40
putting chips in people's brains, example,
22:42
there's the, you know, maybe the
22:44
ChatGPT, you know, 50 or
22:46
whatever, just gets so
22:49
advanced that, you know, which speciation
22:51
versions do you feel are most
22:53
realistic or do you think all
22:55
of them will happen to some
22:57
degree? The cyborgian thing is going
23:00
to be very, very slow. You know,
23:02
I just recently went to see
23:04
the Neuralink stuff, where they have
23:06
the implant, it's about the size of a quarter,
23:06
in the back of your head, and
23:11
it allows a monkey to control
23:13
the computer with its
23:15
mind, and they're coming close
23:17
to actually doing human testing
23:20
on this for quadriplegics to
23:22
help them walk. And that was a
23:24
lot closer than I thought. So that's
23:26
how it starts. It starts with
23:28
a kind of a medical
23:31
therapeutic way to help
23:33
people. But that's, I'm not
23:35
sure, I mean, that will speed
23:37
up the evolution in certain
23:39
directions, but that's a very
23:42
very slow process. Whereas the
23:44
genetic germline thing
23:46
can happen, you know, at
23:48
the rate of human generations.
23:50
So I would, so I would
23:53
say in terms of like literally
23:55
speciating and having different
23:57
varieties of humans. I
23:59
think the genetic side will be
24:01
really the
24:04
one to pay attention to
24:06
and it can be informed
24:08
in terms of what we
24:10
discover from AI and other things in
24:12
terms of how to do it but
24:14
you know that's still a
24:16
slow process you know 25 years of
24:19
life for a human generation
24:21
to turn over it'll take a long
24:24
time before there's like a you-
24:26
can't-breed kind of
24:28
speciation too. So that's kind of a
24:30
more technical biological thing is that
24:33
you can't breed. And I think
24:35
that's, I don't know if that will ever, you
24:37
know, be a consideration; we probably
24:39
can genetically engineer some way
24:41
to breed these hybrids. So,
24:43
speciation means only that more of
24:46
an identity, I guess. The Eliezer
24:48
Yudkowsky argument, as far as I
24:50
understand it, is something along the lines
24:52
of the idea that it's not that
24:54
AI needs to be conscious in order
24:56
to get rid of us, it's
24:58
just, it needs to be
25:01
different to us, which it is,
25:03
and then just say, or just
25:05
understand that our atoms are, you
25:08
know, something that they could use
25:10
to, or anything else, for it
25:12
to advance its own goals,
25:14
and then at some point
25:16
it will become in its
25:19
own interest to eliminate us.
25:21
Maybe I'm botching his
25:23
argument. How would you
25:25
comment on that? of the many
25:28
arguments out there
25:30
about existential risk, the
25:32
only one that sort of
25:34
makes sense to me as a
25:37
thing to worry about is that
25:39
basically that we over
25:41
time allow and engineer
25:43
AI to do more for us. And
25:45
at some point we give over
25:47
more and more of we voluntarily
25:51
engineer more and more
25:53
control to the AIs,
25:55
on purpose to
25:58
run things. So that
26:00
at that point of kind of giving
26:02
them power, we're at their mercy
26:04
should it awaken to some kind
26:07
of sense of survival being
26:09
threatened. That's the closest I
26:11
can understand it. I
26:13
think it's a fantasy in many
26:16
many dimensions and one of them
26:18
is that's not going to happen
26:20
fast so the one fantasy is
26:22
that it happens so fast we can't back
26:25
out of it. Two, the fantasy is
26:27
that there's a single AI or
26:30
incredible collaboration among
26:32
several big AIs and there's
26:34
no evidence at all. The evidence
26:37
is going to be that there's
26:39
going to be millions, hundreds
26:42
of different AIs. So it's like
26:44
we have different engines and
26:46
tools and machines. We don't
26:48
have one big machine. And
26:51
then the third one, I think the
26:53
most serious fantasy part
26:55
is this complete overestimation
26:58
of the role of intelligence
27:01
in achieving things, accomplishing things.
27:03
There's this idea that
27:06
whatever is smartest will
27:08
dominate. But we know if you
27:10
put a human and a tiger in
27:12
a cage, we know which one's going
27:15
to live. It's not the
27:17
smartest one. There is just so
27:19
many other things that are
27:21
necessary to accomplish things in
27:24
the real world. Being the
27:26
smartest person in the room does
27:28
not necessarily mean that you are
27:30
the dominant person. No matter how
27:32
smart you are, you
27:35
need other things to get things done,
27:37
including access to things, and
27:39
cooperation and collaboration many people
27:42
doing many different things or
27:44
many AIs or many things
27:46
and there are middle-aged guys who
27:48
like to think a lot, really, really
27:50
put a high emphasis on thinking
27:52
and they think that if they
27:54
could think faster and better
27:56
then they would be running
27:59
the world. And they can think of
28:01
all the ways in which they could do it,
28:03
usually figure out how they could
28:06
take over, but it's a fantasy
28:08
because that's not how the world
28:10
works. That's not how reality works.
28:12
You have all these things and
28:14
they break down and they don't work
28:17
on the first attempt. And by
28:19
the way, the instinct for
28:21
survival usually will dominate
28:23
the, you know, the attempt to kill.
28:26
Survival is a more powerful motivator.
28:28
than trying to get rid of
28:30
somebody. I think it's a
28:33
fantasy kind of like Superman.
28:35
It's beautiful. It's mythic.
28:38
It makes comic book
28:40
sense. But why isn't the
28:42
Neanderthals to humans kind
28:45
of like evolution? Why isn't
28:47
that a good analogy for what
28:49
could happen or will happen?
28:51
Well, well, yes. Okay. I think
28:53
it is an analogy. And so
28:56
that took, I don't know, 10,000
28:58
years. Yes, if you would say that
29:00
over time that they could replace us
29:02
over time, I could buy that. But
29:04
to say that, you know, all at once
29:07
overnight, that's this comic
29:09
book, fantasy. Right. And we
29:11
do see, you know, in
29:13
time, there are jumps in, you
29:15
know, sort of economic growth,
29:17
for example. So things aren't always
29:19
as linear, but even still, you
29:21
know, 10,000 years is a long
29:23
time, versus expecting this to happen
29:25
in 20 years. And the analogy
29:27
with Neanderthals, by the way, is
29:29
that they much more likely had been
29:32
bred away rather than murdered. The
29:34
point is that there was a willing
29:36
merging going on. It wasn't that they were
29:38
killing us off. And so in
29:40
that sense, it's like, yeah, we may merge with
29:42
them, but only if we decided it was
29:44
in our, you know, in our interest
29:47
to do so. And that's one
29:49
of the scenarios, that the Neanderthals joined
29:51
us, rather than that we
29:53
murdered them. And it took
29:55
a long time. And if that's the
29:58
scenario that we're talking about. Okay.
30:00
We'll continue our interview in a moment
30:02
after a word from our sponsors. In an age of
30:04
AI and big data, your personal information has
30:06
become one of the most valuable
30:08
commodities. Data brokers are the silent
30:11
players in this new economy, building
30:13
detailed profiles of your digital life
30:15
and selling them to the highest
30:17
bidder. With data breaches happening frequently,
30:19
these profiles aren't just fueling spam.
30:21
They're enabling identity theft and putting
30:23
your financial security at risk. But
30:25
what if there's a way you
30:27
could opt out of that system
30:29
entirely? Incogni is a service that
30:31
ensures your personal data remains private
30:33
and secure. They handle everything from
30:35
sending removal requests to managing any
30:37
pushback from data brokers. What I
30:39
trust most about Incogni is how
30:41
it provides ongoing protection. Once you're
30:43
set up, it continuously monitors and
30:45
ensures your information stays off the
30:47
market. Want to extend that peace
30:49
of mind to your loved ones?
30:52
Incogni's Family and Friends plan lets
30:54
you protect up to four additional
30:56
people under one subscription. Take control
30:58
of your digital privacy today. Use
31:00
code upstream at the link below
31:02
and get 60% of an annual
31:04
plan. Again, that's incognite.com/upstream. Hey everyone,
31:06
Eric here. In this environment,
31:08
founders need to become profitable
31:10
faster and do more with
31:13
smaller teams, especially when it
31:15
comes to engineering. That's why
31:17
Sean Lanahan started Squad, a
31:19
specialized global talent firm for
31:21
top engineers that will seamlessly
31:24
integrate with your org. Squad
31:26
offers rigorously vetted top 1% talent that
31:29
will actually work hard for you every
31:31
day. Their engineers work in your time
31:33
zone, follow your processes, and use your
31:35
tools. Squad has front-end engineers excelling in
31:37
TypeScript and React, and Next.js, ready to
31:40
onboard to your team today. For back-end,
31:42
Squad engineers are experts at Node.js, Python,
31:44
Java, and a range of other
31:46
languages and frameworks. While it may cost
31:49
more than the freelancer on Upwork billing
31:51
you for 40 hours but working only
31:53
two, squad offers premium quality at a
31:55
fraction of the typical cost, without the
31:57
headache of assessing for skills and culture fit.
32:00
Squad takes care of sourcing, legal
32:02
compliance, and local HR for global
32:04
talent. Increase your velocity without amping
32:07
up burn. Visit choose Squad.com and
32:09
mention turpentine to skip the wait
32:11
list. You wrote a book a long time ago
32:13
called What Technology Wants, and you've written
32:16
a lot about the topic since then.
32:18
And obviously you've been thinking about technology
32:20
for, you know, many decades, both past
32:23
and future. Do you feel it's kind
32:25
of a precursor in some ways to...
32:27
Marc Andreessen's software is eating the
32:29
world thesis or how would you
32:32
kind of contextualize your mental
32:34
model of technology and
32:36
what it wants, where it's going? I
32:38
mean, does it completely take over, do
32:40
we become kind of like a
32:42
completely techno planet, is that what
32:45
you mean by a takeover? We become
32:47
the tools our tools shape us literally?
32:49
Well yes so so in a certain
32:51
sense we are becoming. As I mentioned,
32:53
we've invented our humanity, we
32:56
invented ourselves, we are an
32:58
invention and we will continue
33:01
to do that. But there are
33:03
many many attributes that we
33:05
have of living in these
33:07
kind of wet cell
33:09
biological self-reproducing
33:11
things that, you know, it's as
33:14
we're making these robots, we've
33:16
come to understand that the
33:18
kind of power density of
33:20
a human, we're a quarter horsepower
33:20
with, you know, a 60-watt brain. It's
33:26
going to be a long time before any
33:28
kind of a robot will be able to
33:30
operate at those low energy levels.
33:32
So yes, so we may continue to diverge,
33:34
but we may not necessarily
33:37
diverge very much from this
33:39
miraculous machine that we have. I
33:41
think we're likely to populate
33:43
our surroundings in the environment
33:46
with all these artificial alien
33:48
beings of all different varieties
33:50
and stripes. But beyond that in
33:52
terms of the AI and just the
33:54
general drift of the technium, which is
33:56
basically just
33:59
to make organs and forms and
34:01
kinds of beings that were possible
34:03
but not possible to be made
34:05
with living tissue. In the long trend,
34:07
you know, there's all these forms
34:10
that could be, that could be doing
34:12
things, but you're not gonna get there
34:14
if they have to be made in
34:16
our cells that are mostly made out
34:18
of water. You can get to those if
34:21
you can make them out of other
34:23
elements. And you can only get to
34:25
there through a mine. So we have. First
34:27
of human mind, then we're going to
34:29
make all these other AI minds that
34:32
will be inventing new ways to
34:34
make forms, new ways to make a
34:36
living, new ways to exist, to be,
34:38
that are technological, and
34:40
we can fill up that. I
34:42
don't think it's going to necessarily
34:45
replace biological forms,
34:47
because that's generally not
34:49
what we see. We see evolution
34:51
much more additive. There are things
34:53
that go extinct. However, in
34:55
technology we don't see extinction.
34:58
That's one of the differences,
35:00
because ours is idea-based, and we can
35:02
carry ideas forward. So we have
35:04
shifted the evolutionary arc, because
35:06
now we don't have to have
35:09
as much extinction. And so we
35:11
can imagine going forward where we
35:13
retain as many of the
35:15
biological species as possible, while
35:18
adding on additional technological
35:20
species. And we make more and more.
35:22
So we have a world, like a planet
35:24
full of all the biological species that we
35:27
have today, and they continue, and
35:29
millions of more technological species.
35:31
And so that's what I think the
35:33
general pattern is. Yes, the world will be filled
35:35
with all kinds of technologies that we don't
35:38
have today, but not at the cost
35:40
of the biological species. And
35:42
to make this a bit more concrete,
35:44
you said a few times, we invented
35:46
our humanity. Unpack exactly what that means
35:48
in terms of like what were the
35:51
biggest inventions or just make that
35:53
a bit more concrete for us. One of
35:55
our biggest inventions is the
35:57
invention of language, which we did
35:59
invent. I mean, it was
36:01
meaning that I think
36:03
that primitive, you know,
36:05
primates trying to communicate,
36:08
try things in their
36:10
mind, try to do
36:12
something, and they worked.
36:14
Those who were able
36:16
to do that survived
36:18
and try it again
36:20
to make something work,
36:22
and that kind of
36:25
that battery of things,
36:27
techniques that they discovered.
36:29
were passed on and became the
36:32
basis of our language in that
36:34
language. What language really gives us,
36:36
there's two things. One is this
36:38
communication between members, which is very
36:41
powerful, and how we can hunt
36:43
better, so our clan could
36:45
survive. But there's something else really
36:47
important about language, which is that
36:50
it gave us access to our
36:52
own thinking. So the only way
36:54
we know what we're thinking is
36:56
through language. So language was
36:59
a two-pronged tool that gave
37:01
us access to try and figure
37:03
out what we are. So it
37:05
was the origin of our consciousness,
37:08
basically. That was very, very powerful
37:10
because it kind of then could
37:12
give us purposes and
37:14
directions and intentions that we didn't
37:17
have before. So that was something
37:19
we invented. And then from them
37:21
we invented things like domestication of
37:23
herding animals, which we could milk,
37:26
and once we figured out that,
37:28
the human body rapidly evolved in
37:30
certain populations, adult tolerance of lactose.
37:32
That happened within, I don't know,
37:35
7,000 years or something. It was
37:37
really fast. We invented cooking and
37:39
fire, which was an external stomach
37:41
that could digest stuff we could
37:44
not with our primate stomachs. We
37:46
could then access nutrition that we
37:48
couldn't get to, which changed our
37:50
teeth and jaws very, very, very
37:53
quickly. So we... So how we
37:55
look right now is something that
37:57
we invented. And so we're continuing
37:59
to do that right now.
38:02
Actually, biological evolution has not slowed
38:04
down with cultural evolution. It's actually
38:06
sped up. So that's what I
38:08
meant by inventing. And then all
38:11
the things that we think are
38:13
important to us, like fairness and
38:15
the moral judgment and all these
38:17
other things, we invented. The idea of
38:20
law, and our idea of, even
38:22
though there are some primitive traits we
38:24
can see, like fairness among primates,
38:26
we have invented elevated and much
38:29
stricter levels of that. And
38:31
some of this is communicated through culture,
38:33
which is something else we invented.
38:35
But when we think of an
38:38
elevated, enlightened human, that's something that
38:40
we've invented. In one of our
38:42
first conversations almost a decade ago,
38:44
I remember you said something like,
38:47
we will need a new mythology
38:49
or new mythologies that helped us
38:51
kind of make sense of the
38:53
new world that we're entering. Yeah,
38:56
I mean, at the Long Now
38:58
Foundation, we're building this clock to
39:00
tick inside the mountain for 10,000
39:03
years, and we hope that
39:05
to be a mythic thing, this
39:07
idea of the clock ticking in the
39:09
mountain for eons, for generations. That's
39:12
the kind of mythologies that I
39:14
think are helpful for us as
39:16
we kind of reimagine ourselves. Yeah,
39:18
and our new purpose in this
39:21
world. It's interesting because, well, just
39:23
on the point of purpose, I
39:25
mean, some people say, hey, you
39:27
know, people will do art and
39:30
poetry and will find all these
39:32
new things once a lot of
39:34
knowledge work has been automated, but
39:36
the AI will be better at
39:39
that kind of stuff too. How
39:41
do you think about in a
39:43
world a decade from now or
39:45
two decades from now how humans
39:48
think about purpose in a different
39:50
way once a lot of things
39:52
where they were the only, to
39:54
use your words, you know,
39:57
don't be the best, be the
39:59
only, could perhaps be done
40:01
with AI? Or feel free to
40:03
just push back on the premise. You know,
40:06
I just think we know so
40:08
little about what our own intelligence,
40:10
our own being is like. And
40:12
so AI is going to be
40:15
disruptive and instructive for decades because
40:17
it's going to help us experiment
40:19
on us in some ways. I
40:21
think we're going to learn more
40:24
about our own brains from AI
40:26
in making AI than we have
40:28
from neuroscience or psychology together so
40:30
far. I think we're going to
40:33
be having this discussion about what
40:35
it is we are about, where are
40:37
we going, what is it for
40:39
the next 50 years at least,
40:42
as all these things come along
40:44
and we kind of re-register them.
40:46
I think one of the things
40:48
that we're, one of the things
40:51
we've done just this year is
40:53
demoted creativity. Again, this has been
40:55
talked about as like, well, the
40:57
thing that humans do is creativity.
41:00
Computers are the opposite of that.
41:02
Now we know that's wrong. Computers can
41:04
do creativity at a lower, minor
41:06
level very easily. And so now
41:09
we're saying, well, yeah, creativity is
41:11
not this high order supernatural thing.
41:13
It's actually very, very primitive. And
41:15
so we've kind of changed our
41:18
ideas about creativity very, very fast.
41:20
And I think that kind of
41:22
thing we're going to continue to
41:24
do, not always necessarily demoting and
41:27
just shifting and maybe having more
41:29
subtle new ones, because I think
41:31
there's what I call capital or
41:33
major creativity and the minor. So
41:36
we'll have to devise new language
41:38
for distinguishing between what this kind
41:40
of everyday novelty is versus a
41:42
kind of a breakthrough where you
41:45
are, we are trying to do
41:47
something that's outside the average. And
41:49
so we probably maybe have two
41:51
new words instead of just the
41:54
one word creativity. I'm fascinated by
41:56
the, you know, we'll learn more
41:58
about ourselves than we have with
42:00
neuroscience and psychology, we still have
42:03
no idea what consciousness is, and
42:05
maybe we'll figure it out in
42:07
the process. One idea you've been
42:09
thinking a lot about for a
42:12
while is this idea of a
42:14
global government, or the need for
42:16
one, and also this idea of
42:19
coveillance, I think you call it,
42:21
there's the idea of kind of
42:23
two-way transparency. Well, yeah, it's not
42:25
surveillance, but coveillance. Yeah. And, you
42:28
know, it's interesting because when some
42:30
people talk about... existential risk, I
42:32
mean, one of the existential risks
42:34
we've introduced in the last century
42:37
has been nuclear weapons, but the
42:39
competitive dynamic in terms of multiple
42:41
countries having nuclear weapons seems to
42:43
have staved that off. And so
42:46
I wonder if similar to, you
42:48
know, right now we're having conversation
42:50
about AI centralization and decentralization, and
42:52
if there should also be a
42:55
competitive dynamic, that might stave that
42:57
off. And one worry people could
42:59
have with the global government is,
43:01
you know, does that concentrate power
43:04
a bit too much? And thus,
43:06
you know, why would they, you
43:08
know, be coveillant, so to speak,
43:10
like, why would they allow, you
43:13
know, two-way transparency? How do you
43:15
think about sort of the, do
43:17
you see a need for decentralization
43:19
there, or do you agree that
43:22
competitive dynamic is what's preventing, you
43:24
know, perhaps this abuse of power? Yeah,
43:26
I would say in general, what
43:28
we know about systems, again, me
43:31
taking the whole systems approach is
43:33
that there's a tremendous power in
43:35
the bottom, in a
43:37
very flat-ish, bottom-up, decentralized, distributed system,
43:40
which is a large part of
43:42
evolution, large part of ecology, a
43:44
large part of living systems, large
43:46
part of the mind. But it's
43:49
not the only part. And that's
43:51
the lesson is that most of
43:53
the systems we see are combinations
43:55
of... lots of decentralization in many
43:58
aspects and some centralization. What we
44:00
know is that the
44:02
advantages of decentralized, distributed systems
44:04
are agility, flexibility,
44:07
adaptability, supremely. They're
44:10
just the best ways to adapt
44:12
to changing environments,
44:14
changing circumstances, changing goals.
44:17
But we also know that
44:19
there's a cost to that. They're
44:22
incredibly inefficient. I
44:24
mean, just by the nature, you're
44:26
duplicating things. There's no
44:29
sense of efficiency whatsoever.
44:31
There's no mechanism for
44:33
efficiency. As soon as you
44:36
introduce efficiency, you begin
44:38
to centralized. So the question
44:40
always is, there are tradeoffs,
44:43
and all these systems being
44:45
hybrid means you're going to pay
44:48
the costs for certain aspects
44:50
of the decentralized
44:52
system, pay the cost
44:54
of inefficiencies and slowness
44:56
and other stuff, because the adaptability
44:58
or flexibility that it gives
45:00
is so valuable that you're
45:02
willing to pay the costs. And in
45:04
other aspects of it, it's not
45:07
worth paying the cost. You want to
45:09
have something more centralized. So
45:11
authentication could be
45:13
decentralized and there's reasons
45:15
to do that and there'll be some
45:17
cases where it's going to be
45:20
worth it to pay the cost
45:22
of completely decentralizing it. But in
45:24
many other cases, it'll make sense
45:26
to have a more centralized version
45:28
of it, so there's no free lunch
45:31
in that way. The way I would say is,
45:33
you know, the bottom up, these
45:35
decentralized systems, will
45:37
take you further than
45:39
you ever thought you could go, and they're usually
45:44
the best way to start, but
45:46
they don't take you all the way. And
45:48
so most of the systems that
45:51
are kind of highly evolved
45:53
and well working will be
45:55
some combination of... mostly decentralized
45:57
with little bits of
45:59
top-down centralized control. This has
46:01
been a fascinating conversation. I want
46:03
to close with one more from your
46:05
book, which is you have this
46:08
one aphorism, which talks about instead
46:10
of thinking about can do or
46:12
can't do, think about I do
46:14
in terms of internalizing something as identity.
46:16
This kind of meta concept, which
46:18
I'll say is a plug for
46:20
the book, which is there's this
46:22
basketball player, Giannis Antetokounmpo, who
46:24
just lost in the first round
46:26
of the NBA playoffs. And before
46:28
the series started, if you were
46:30
to ask him, hey, you know,
46:32
if you guys lose this series,
46:34
is it a failure? He would
46:36
have said, yes, it's a failure.
46:39
We have to win at all
46:41
costs and got to have our,
46:43
you know, head in the game.
46:45
But then afterwards, when someone asked
46:47
him, you know, is this season
46:49
a failure, because they lost, he
46:51
said, no, it's a learning experience.
46:53
And he just talks about the
46:55
idea of like, certain proverbs might
46:57
serve you better in certain times.
46:59
And maybe life is about, you
47:01
know, you know, adopting the right
47:03
mindset. I think you agree. I'm
47:05
not sure if it's the same
47:07
guy, but there was somebody else
47:10
recently. He said, there's no failures
47:12
in sport. It's the same guy,
47:14
same guy. Yeah, I mean, that's
47:16
just very, very briefly, but that's
47:18
one of the basics. One of
47:20
the newest, occasionally there are new
47:22
things under the sun, and there
47:24
is one new thing under the
47:26
sun that I think Silicon Valley
47:28
can take credit for is de-moralizing
47:30
failure. To understand that failure is
47:32
seen now as a necessary component
47:34
for science, for innovation, for entrepreneurship,
47:36
for the economy in general, and
47:38
that what you want to really
47:41
have are systems that manage failure.
47:43
Okay, the failure management systems where
47:45
you have your failures in small
47:47
doses and you manage them to
47:49
prevent the cataclysmic failures that you
47:51
want to avoid. And so I
47:53
think there's been a change, complete
47:55
sea change, where if you lost
47:57
money or if you had bad
47:59
grades or if an experiment failed,
48:01
that was considered a disgrace, and
48:03
now it's seen, as you say,
48:05
as a learning experience as a
48:07
way we go forward. So the
48:09
fail forward idea. So yes, fail
48:12
forward. That's a great place to
48:14
end. The book is Excellent Advice
48:16
for Living: Wisdom I Wish I'd
48:18
Known Earlier. Kevin, thanks so
48:20
much for writing this book and
48:22
for coming on the podcast. I
48:24
really appreciate your great questions, Eric.
48:26
Thank you. Upstream with Erik Torenberg
48:28
is a show from Turpentine. The
48:30
podcast network behind Moment of Zen
48:32
and Cognitive Revolution. If you like
48:34
the episode, please leave a review
48:36
in the Apple Podcasts store. Hey listeners,
48:38
it's Eric. It's overwhelmingly clear that
48:40
health care needs fixing, but who's
48:43
actually doing it? The podcast Second
48:45
Opinion brings you the problem solvers
48:47
making real progress across health tech,
48:49
biotech, AI, pharma, funding, and actual
48:51
medicine. Our hosts are subject matter
48:53
experts. Christina Farr is a leading
48:55
health tech journalist and investor. Dr.
48:57
Ashe Zanuz is a physician and
48:59
former tech CEO, and Luba Greenwood
49:01
is a biotech executive and lawyer.
49:03
Every week, they cut through the
49:05
hype to get at what actually
49:07
matters in health care's future. Get
49:09
the inside story on the Second
49:11
Opinion podcast. Search for it in
49:14
Apple Podcasts, Spotify, or your favorite
49:16
podcast app.