Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
0:00
Hey upstream listeners, today we're
0:02
releasing a conversation from the Turpentine
0:04
show, Complex Systems, between host
0:06
Patrick McKenzie and entrepreneur and
0:08
investor Azeem Azhar. They discussed the
0:11
history of data centers, the landscape
0:13
of energy infrastructure for AI,
0:15
the coming LLM boom, and more. Please enjoy.
0:32
Hideho everybody, my name is Patrick
0:34
McKenzie, better known as patio 11 on
0:37
the internet, and I'm here with my buddy
0:39
Azeem Azhar. It is so great to be here.
0:41
I love the name of your podcast. Oh,
0:43
thank you very much. So Azeem
0:45
runs a newsletter called Exponential View
0:47
and we're going to be talking
0:49
about the power economics specifically that
0:51
of data centers today. People might
0:53
have heard recently in the news
0:55
that OpenAI et al. are
0:58
building a multi-billion dollar Stargate facility
1:00
down in Texas. People might have
1:02
heard some... sort of hand-wavy or
1:04
more evidenced calculations that data center
1:06
usage is going to be tens
1:08
of percent of all power usage
1:10
in the near future. And I
1:12
think for folks inside the industry,
1:14
these are eye-popping numbers, but they're
1:17
somewhat acclimated to them. For people
1:19
who are outside the industry, this
1:22
is all just a little bit
1:24
wild. So let's take the very
1:26
long view. How does this compare
1:28
to other infrastructure rollouts over the
1:30
last couple of centuries, and then
1:33
go into the nitty gritty. But
1:35
speaking of things that are a couple
1:37
of centuries old, we've been around this
1:39
computer thing for a while, haven't
1:41
we? We certainly have. I still
1:43
have my first computer. It's a
1:45
Z80 processor, and the computer's called
1:47
the ZX81, and I got it in
1:50
1981. So it's 43 years old. It's
1:52
going to be 44 this year, and
1:54
probably older than many of the listeners.
1:56
The first modem I ever used was
1:58
300 bits per second. for those of
2:00
you who have had cell phones
2:03
for your entire life. Anyhow, so
2:05
let's talk about slightly more ancient
2:07
infrastructure. So as we're doing this
2:09
build-out of data centers and the
2:11
electricity, both generation and transmission apparatus,
2:13
that powers them, I'm sometimes put
2:15
in mind of other societal-wide infrastructure
2:17
build-outs. What's one that comes to
2:19
mind for you? You know we
2:21
see these infrastructure build-outs every 50
2:23
to 60 years roughly speaking and
2:25
the really really big ones in
2:28
the US and in Europe were
2:30
for the railways and they were
2:32
for electrification just over a hundred
2:34
years ago or a hundred to
2:36
80 years ago in the US.
2:38
One of the interesting facts around
2:40
all of these build-outs is just
2:42
how significant they were. If you
2:44
look at the... build out of
2:46
the railways in Great Britain in
2:48
the 1840s to 1860s, roughly 6
2:50
to 7% of GDP per annum
2:53
went into capital investment to build
2:55
out those rail lines, which
2:57
in US terms today, given the
2:59
size of US economy, would be
3:01
approaching kind of a trillion, a
3:03
trillion and a half dollars a year.
3:05
Whenever we see a general-purpose technology
3:07
like this, like artificial intelligence, like
3:09
the internet, like telecoms and electricity,
3:11
or the railways, infrastructure needs to
3:13
get built, and the sums of
3:15
money that go into it are
3:18
always eye-popping. And so too, by
3:20
the way, are the dynamics of
3:22
over-investment, and then, you know, at
3:24
some point there being a bubble
3:26
that pops, because it's so hard,
3:28
as you know, to... forecast exactly
3:30
when demand and where demand will
3:32
arise. Mm-hmm. Byrne Hobart has a
3:34
book called Boom with Stripe Press
3:36
that lays out this theory with
3:38
his co-author Tobias, I believe.
3:40
And essentially he says that infrastructure
3:43
overinvestment tends to happen with bubble
3:45
dynamics, but that the bubble is
3:47
actually somewhat positive in that it
3:49
causes a Schelling point of... and
3:51
different people, firms, government entities, etc.
3:53
that all have different pieces of
3:55
the puzzle that would not coordinate
3:57
to do a nationwide or even
3:59
worldwide infrastructure revolution, but for somewhat
4:01
bubble dynamics. And then... often bubbles
4:03
pop because as you mentioned it
4:06
is virtually impossible to get these
4:08
questions right a priori but the
4:10
demand sort of back fills over
4:12
the intervening decades after the popping
4:14
of the bubble. I honestly don't
4:16
know — there's substantial technical uncertainty, etc.,
4:18
etc. This might be
4:20
the bubble that doesn't pop and
4:22
it's always dangerous to say this time is
4:24
different but what I'm saying is
4:26
like there is some possibility that
4:28
there is a discontinuity here with
4:31
with previous infrastructural build outs but
4:33
for people who are our generation
4:35
who remember very keenly the dot-com
4:37
bust happening. People remember the dot-com
4:39
bust as being about the application
4:41
layer, about Webvan and Pets.com. We
4:43
had great.com domains back in those
4:45
days, but they remember that, but
4:47
by count of money invested, almost
4:49
all of it was doing coast-to-coast
4:51
fiber and copper rollouts, and that
4:53
substantially... enabled the development of the
4:56
modern internet and its use as
4:58
e-commerce, etc. platforms. Yeah, I agree
5:00
with that. And in fact, the
5:02
telecoms bubble is really salutary. The
5:04
total investment in telecoms infrastructure you
5:06
described as kind of coast-to-coast, but
5:08
there was stuff happening outside the
5:10
US as well, was, as I
5:12
recall, around $600 billion between
5:12
1996 and 2001, and US
5:16
telecoms firms alone took on about
5:18
350 to 370 billion dollars of
5:21
debt in order to do this.
5:23
And they did it at a
5:25
time when the telecoms market was
5:27
growing at about 7% per annum.
5:29
And so what feels really different
5:31
about this AI market is that
5:33
firms are growing much, much faster
5:35
than that. I mean, 7% per
5:37
week is not unheard of right
5:39
now. And you're getting these early
5:41
stage startups like Cursor, which
5:43
are getting to $100, $150 million.
5:46
of annual recurring revenue within 18
5:48
months or so, which is an
5:50
absolutely dramatic, dramatic result. And in
5:52
fact, six months ago, we were
5:54
looking at data that showed that
5:56
AI-based software startups were growing three
5:58
times faster than fast-growing SaaS startups
6:00
in the pre-AI era, and that's
6:02
compressed even further. And the reason
6:04
that's different to the telecoms infrastructure
6:06
build-out is that the revenues... are
6:08
probably ramping faster than the infrastructure
6:11
growth is ramping, whereas the reverse
6:13
was true back in the telecoms
6:15
bubble, which has been relabeled the
6:17
dot-com bubble by some historians. Yeah,
6:19
so I'm also hearing reports of
6:21
this anecdotally. Don't treat the following
6:23
as... a quote from an informed
6:25
market participant but just treated as
6:27
a bit of market color on
6:29
the grapevine. People are saying that
6:31
firms which are at about the C stage
6:34
for raising investment in Silicon Valley
6:36
and typically at that point you
6:38
continue to be quickly growing but
6:40
are sort of reinvesting in processes
6:42
to support the next 10X over
6:44
the next couple of years and
6:46
getting let's say a little less
6:48
cowboy about the operations. There are
6:50
people who are saying that they
6:52
are seeing firms at the C-round
6:54
stage that are growing as quickly
6:56
as Y Combinator companies that are, you
6:59
know, double-digit weeks old with 10%
7:01
week-over-week growth, etc., and
7:03
not growth in a vanity metric
7:05
or growth in eyeballs for viewing
7:07
cat videos, but growth in enterprise
7:09
spend on AI capabilities. Which to
7:11
the extent that one trusts that
7:13
observation, is mind-blowing. It's absolutely wild,
7:15
and I have heard similar things
7:17
as well, so presuming we're not...
7:19
hearing the same rumor sourced from
7:21
the same person. Let's assume that
7:24
these are coming from different places.
7:26
That seems to match somewhat with
7:28
what I hear as well. And
7:30
you can see it in from
7:32
other data sets. So there is
7:34
a platform called OpenRouter, which has
7:36
access to a whole bunch of
7:38
LLM APIs. When I
7:40
looked at it, I saw an
7:42
8x growth in token usage, as
7:44
in tokens served by a set
7:46
of these models, over a somewhat
7:49
less than one year period. And
7:51
so we're going from, you know,
7:53
a few hundred million tokens per
7:55
month to, you know, several billion
7:57
tokens per month, and of course,
7:59
while token prices have come down,
8:01
that is showing that elasticity of
8:03
demand and the fact that the
8:05
demand exists, and the demand is
8:07
not being saturated at all. So
8:09
I sometimes feel that we are
8:11
the blindfolded men walking around this
8:14
elephant and we have to sort
8:16
of put the full picture of
8:18
what's happening in the market together.
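To put rough numbers on the OpenRouter growth just described: an 8x increase over somewhat less than a year implies month-over-month growth in the low twenties of percent. A minimal sketch of that arithmetic, where the start and end volumes and the 10-month window are illustrative assumptions rather than figures from the conversation:

```python
# Rough arithmetic on the token-growth figures discussed above.
# Assumptions (illustrative, not from the conversation): "a few hundred million"
# -> 500M tokens/month, "several billion" -> 4B tokens/month, over 10 months.

start_tokens_per_month = 500e6
end_tokens_per_month = 4e9
months = 10

growth_multiple = end_tokens_per_month / start_tokens_per_month    # ~8x, the figure quoted
monthly_growth = growth_multiple ** (1 / months) - 1

print(f"total growth: {growth_multiple:.1f}x")
print(f"implied compound monthly growth: {monthly_growth:.1%}")    # roughly 23% month over month
```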
8:20
And I don't really see many
8:22
of the feelers that suggest we're
8:24
not dealing with something that's big
8:26
and fast growing. I mean, typically
8:28
every conversation I have — I'm sometimes a
8:30
bit skeptical about them — points to
8:32
the kind of dynamic that you
8:34
talked about, which is that the
8:36
growth is really, really fast. It
8:39
continues to be fast. And, you
8:41
know, as prices come down, spend
8:43
goes up because you can, you
8:45
know, basically make the economics work
8:47
on newer use cases. I also
8:49
think that people, there's a comparative
8:51
advantage in having seen the dynamics
8:53
of SaaS companies scale in understanding
8:55
how this is going to scale,
8:57
because I think some of the
8:59
best informed people with regards to
9:02
what LLMs can actually do these
9:04
days are folks that are playing
9:06
with them every day. But if
9:08
your view on an LLM's capability
9:10
is I chat with Claude all
9:12
the time, he seems very emotionally
9:14
supportive. I've done this sort of
9:16
song generation and Suno, which is
9:18
a wonderful experience by the way.
9:20
You're probably not predicting what those
9:22
capabilities in an API plus a
9:24
two to five year enterprise integration
9:27
cycle looks like because after that
9:29
exists it's not going to be
9:31
you know, you invoking one LLM
9:33
at a time for a couple
9:35
hours per day it will be
9:37
everybody getting a staggering number of
9:39
LLM invocations on their behalf every
9:41
day most in the background where
9:43
the rate model isn't a person
9:45
having a conversation with something; it's
9:47
more similar to what happens when
9:49
you open up the New York
9:52
Times and several hundred robots conduct
9:54
an instant auction for your attention
9:56
on your behalf. And there's an
9:58
entire ecosystem of firms, ad tech, that
10:00
make that happen and you will
10:02
never know most of their names.
10:04
Anyhow. Yeah, well, can I add
10:06
to that? I mean, we already
10:08
got that crossing point on web
10:10
traffic a couple of years ago
10:12
where 50% of traffic generated is
10:14
bot traffic doing various things like
10:17
that. And within, you know, processes
10:19
within the enterprise, there are so
10:21
many advantages in having LLMs or
10:23
AI systems talk to each other,
10:25
I mean, fundamentally because they can
10:27
be much faster than we can.
10:29
And that's one of the things
10:31
that emerges when you see these
10:33
sort of optimized, distilled LLMs running
10:35
at a thousand tokens per second,
10:37
which is really, really
10:39
significant in terms of, you know,
10:42
ingesting information, making a decision on
10:44
that information, and sending a signal
10:46
back out to the next step
10:48
in the process at millisecond speed,
10:50
whereas humans work at, you know,
10:52
minute speed, and at a thousand
10:54
tokens a second rather than five
10:56
to eight tokens a second, which
10:58
may be where we might sit
11:00
at the very, very best of
11:02
times when we're reading something, let
11:04
alone when we're writing something. And
11:07
that velocity, I think, in and of
11:09
itself, will breed faster,
11:11
you know, faster velocity. And that's
11:13
why I love the idea of
11:15
this being a complex system and
11:17
the complex systems podcast, because that
11:19
is ultimately a complex system, right,
11:21
with all of these feed-forward loops
11:23
and flywheels. Yeah. And if I
11:25
can give people concrete examples of
11:27
what is going to happen very,
11:30
very quickly. If you're a kind
11:32
of connoisseur of the fictional experience
11:34
of watching rich people talk to
11:36
each other, a line that you
11:38
hear a lot in those... movies
11:40
and etc. is let's have your
11:42
people talk to my people where
11:44
the two principals are aloof from
11:46
their own calendar management but there's
11:48
some implicit team in the background
11:50
that takes care of that for
11:52
them. Let's have your LLM talk
11:55
to my LLM is already happening
11:57
and will certainly happen in the
11:59
future in every conceivable way. As
12:01
an example, I recently received a push
12:03
notification, email, and paper letter, all
12:05
from a bank, asking me, hey,
12:07
we haven't seen you update your
12:09
address with us in a while.
12:11
We need to be on top
12:13
of your address. And so do
12:15
you have an update for us?
12:17
If not, just tell us so.
12:20
And that is totally going to
12:22
be an LLM-driven conversation in the
12:24
future. You can optimize out the
12:26
stamp entirely, the annoying interaction with
12:28
the customer, and also optimize out probably
12:30
all the time on the bank
12:32
side. given the nature of the
12:34
current process. Yeah, can I give
12:36
you a really concrete example that
12:38
I use as well? Sure. One
12:40
of the tools that I use
12:42
is a network of LLMs. So
12:45
I'll have a single LLM that
12:47
acts as a kind of orchestrator
12:49
and I'll have several other LLMs
12:51
that act as members of a
12:53
focus group. And I will put
12:55
my question in to the orchestrator.
12:57
and the orchestrator will pass that
12:59
question which is sort of evaluate
13:01
this idea or evaluate this product
13:03
to the underlying agent LLMs. Typically
13:05
I'll use three or four. Each
13:07
one of them will have a
13:10
fairly detailed profile or pen
13:12
portrait, right, a persona, a marketing
13:14
manager or an investor and you
13:16
know the last one might be
13:18
a recent college grad and they
13:20
will iterate and argue between themselves
13:22
about the merits of this particular
13:24
product, with a view to coming
13:26
to some kind of distinct consensus.
13:28
And in that virtual focus group
13:30
where I run three or four
13:32
of these, they will go back
13:35
and forth and they will generate...
13:37
tens of thousands of tokens and
13:39
then ultimately the orchestrator will respond
13:41
to me. Now I do that
13:43
to help me sense-check ideas that
13:45
I might want to explore or
13:47
research or to kind of really
13:49
push them or to find a
13:51
very exacting through-line, or a weakness or set
13:53
of weaknesses in the idea. And
13:55
there are companies out
13:58
there — Electric Twin in the
14:00
UK is one — that are
14:02
building panels of LLMs — panels
14:04
of virtual, synthetic personas that are
14:06
built within LLMs — where these
14:08
conversations will happen much more
14:10
rapidly and at much larger
14:12
scale in order to do in
14:15
silico what we might have had
14:17
to do quite slowly, you know,
14:19
sort of in vivo and much
14:21
more expensively with real focus groups.
14:23
And I think that those are
14:25
also examples that are beyond, you
14:27
know, AI agents interacting on
14:29
fixed processes and process flows
14:31
where they're much much more
14:33
open-ended, and actually the amount
14:36
of resource that could go into
14:38
those could be quite significant.
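For readers who want to see the shape of the orchestrator-plus-focus-group setup described here, a minimal sketch follows. Everything in it is illustrative: `llm_complete` is a hypothetical placeholder for whatever chat-completion API you use, and the personas are examples, not anyone's real configuration.

```python
# Sketch of an orchestrator that runs a synthetic focus group of persona LLMs.
from dataclasses import dataclass

def llm_complete(system: str, prompt: str) -> str:
    """Hypothetical placeholder: call your LLM provider of choice here."""
    raise NotImplementedError

@dataclass
class Persona:
    name: str
    profile: str  # the "pen portrait" used as the system prompt

PERSONAS = [
    Persona("marketer", "You are a skeptical B2B marketing manager..."),
    Persona("investor", "You are a growth-stage investor focused on unit economics..."),
    Persona("new_grad", "You are a recent college graduate and likely end user..."),
]

def focus_group(question: str, rounds: int = 2) -> str:
    # Build up a shared debate transcript that every persona sees and extends.
    transcript = f"Question under discussion: {question}\n"
    for _ in range(rounds):
        for p in PERSONAS:
            reply = llm_complete(p.profile, transcript + f"\n{p.name}, respond to the discussion so far.")
            transcript += f"\n[{p.name}] {reply}"
    # The orchestrator reads the whole debate and distils a consensus view.
    return llm_complete(
        "You are the orchestrator. Summarise the panel's consensus, key disagreements, and weaknesses in the idea.",
        transcript,
    )
```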
14:40
So stepping back for a moment,
14:42
a thing that I've used as
14:44
part of my writing process for
14:46
many years is to either have
14:48
a formal review from other people
14:50
or, more usually, have
14:52
an informal review where I conjure
14:54
my mental model of a particular
14:56
person in my head, read it, and then ask,
14:58
okay, you know, what would this person
15:00
in the industry think of
15:02
this piece right now? Patrick Collison has
15:04
described using that for his own writing
15:07
process, but I think I stole the
15:09
idea from him. At any rate,
15:11
one can increasingly do that by, like,
15:13
telling an LLM to role-play as someone
15:15
who has a large corpus on the
15:17
internet, and you don't necessarily need them
15:19
to successfully anticipate more than, you know, 80,
15:21
90% of what the person would say, just as
15:23
an idea generation thing. I did it over
15:25
the weekend on something which was
15:28
extremely professionally significant — significant enough
15:30
that I was also
15:32
spending social karma and no small
15:34
amount of money with a number
15:36
of external professional advisors. And the
15:39
fact that the LLLM needs no
15:41
social karma at all and trivial
15:43
amounts of money to step through
15:45
like role-playing as say five different
15:47
potential audiences for a piece was
15:50
revelatory for me in terms of
15:52
the quality, speed of iteration,
15:54
etc., etc. etc. for the
15:56
advice I got. And if, you know, sort of
15:58
early adopters like us are using
16:00
this in production — like, this was
16:03
a real thing
16:05
for me, a lot was riding
16:07
on this, you
16:09
know, not a fun test just
16:11
to try out the new toy.
16:13
If the early adopters are using
16:16
this in production right now for
16:18
this, you can imagine when every
16:21
marketing team in many places
16:23
is running, as you said,
16:25
in silico panels. If I can help
16:27
us kind of frame this as well, which
16:30
is that, you know, when DeepSeek was
16:32
released at the end of last year, sort
16:34
of both V3 and R1, and
16:36
then we had the flurry of
16:38
excitement in late January, you know,
16:40
the idea of Jevons paradox surfaced,
16:42
right — Satya Nadella from
16:44
Microsoft said this, and, you know,
16:47
the point about Jevons paradox was
16:49
that essentially, you know, if you've
16:51
got clogged traffic around Austin, and
16:53
you build another freeway within a
16:55
few months you'll have more
16:57
traffic jams because ultimately there's positive
16:59
elasticity of demand, right? You reduce the cost
17:01
and demand goes up, you know, up until
17:03
a saturation point and one of the
17:05
questions is where is that saturation point
17:07
with AI. And the truth is,
17:10
I think we are nowhere near, and
17:12
by nowhere near I can't even find
17:14
the phrase for the size of the
17:16
fraction of being nowhere near. So much
17:18
of what we do in business and
17:21
often in our personal lives is fundamentally
17:23
gated by the fact that we don't
17:25
have enough time to think through to
17:27
get to the optimal solution. So we just
17:29
use a heuristic, and we accept there
17:32
being a bit of slack and a bit
17:34
of loss. And essentially we're
17:36
not going to do that anymore, right? We're
17:38
going to let these AI agents do tons
17:40
and tons of thinking enormous amounts of it,
17:43
things which we, you know, we just did
17:45
out of habit, we will get them
17:47
to improve, particularly in business. And so I
17:49
don't think there's any end to the
17:51
amount of demand that we could individually
17:54
generate as individuals either as
17:56
people at home or in
17:58
the workplace for thinking to
18:00
be done by these machines. And the second
18:02
lever of that is that very few people —
18:04
few of us are currently doing that
18:07
and very few organizations are doing that.
18:09
I mean, I think OpenAI has
18:11
125,000 people paying for the pro level
18:13
of ChatGPT, which is absolutely
18:15
phenomenal. You know, and if you're in a
18:17
job that pays 100K a year or more,
18:19
you should be investing in that straight away
18:22
because your ROI will be incredible.
18:24
And so I think that these two, these two...
18:26
elements, which is one is how
18:28
deep does each person or organization want to
18:30
go and what new organizations emerge, and
18:33
how many of us are participating — and these
18:35
are both going to expand very, very
18:37
dramatically, and as prices for intelligence
18:39
or per token come down, the
18:41
break-even point will rise and will
18:43
accelerate the demand. And so that
18:45
gets us to the question of
18:47
like, what is the infrastructure that's
18:49
going to serve all of that?
18:51
So we'll get to that infrastructure
18:53
in one second, but to remind
18:55
people of a famous phrase in
18:58
the technology industry, there was a
19:00
point where intelligent people, well steeped
19:02
in the worldwide situation for demand,
19:04
said that there is a worldwide
19:06
market for perhaps five computers. And
19:08
it turns out that we can
19:10
deploy many more than five. We are,
19:12
I think, right now in the five
19:14
computer days of LLM usage, where we've
19:16
applied it to the obvious applications, and
19:18
people who are only seeing
19:20
the obvious applications fail to appreciate
19:23
what will happen once it gets
19:25
injected into just about everything, and
19:27
that, I feel pretty confident,
19:29
is going to happen over the
19:32
course of the next 10-20
19:34
years with new developments on
19:36
a week-by-week, month-by-month basis. We'll continue
19:38
our interview in a
19:40
moment after a word from our sponsors.
19:42
Even if you think it's a bit
19:45
overhyped, AI is suddenly everywhere, from
19:47
self-driving cars to molecular medicine to business
19:49
efficiency. If it's not in your industry
19:51
yet, it's coming fast. But AI needs
19:54
a lot of speed and computing power.
19:56
So how do you compete without costs
19:58
spiraling out of control? Time to upgrade
20:01
to the next generation of the
20:03
cloud. Oracle Cloud Infrastructure, or OCI.
20:05
OCI is a blazing fast and
20:07
secure platform for your infrastructure, database,
20:10
application development, plus all your AI
20:12
and machine learning workloads. OCI costs
20:14
50% less for compute and 80%
20:17
less for networking. So you're saving
20:19
a pile of money. Thousands of
20:21
businesses have already upgraded to OCI,
20:24
including Vodafone, Thomson Reuters, and Suno
20:26
AI. Right now, Oracle is offering
20:28
to cut your current cloud
20:30
bill in half if
20:33
you move to OCI
20:35
for new US customers
20:37
with minimum financial commitment.
20:39
Offer ends March 31st.
20:42
See if your company
20:44
qualifies for this special
20:46
offer at Oracle.com/Turpentine.
20:50
That's Oracle.com
20:52
slash Turpentine. And I bet I'm not
20:55
alone. In a world of endless scrolling
20:57
and ad blockers, sometimes the most powerful
20:59
way to reach people is in the
21:01
real world. That's where Adquick comes
21:03
in. The easiest way to book out-of-home
21:05
ads, like billboards, vehicle wraps, and airport
21:07
displays, the same way you would order
21:10
an Uber. Started by Instacart
21:12
Alums, Adquick was born from their
21:14
own personal struggles, to book and measure
21:16
ad success. AdQuick's audience targeting,
21:19
campaign management, and analytics
21:21
bring the precision and
21:23
efficiency of digital to the
21:26
real world. Ready to get
21:28
your brand the attention it
21:30
deserves. Visit adquick.com today to
21:32
start reaching your customers in
21:35
the real world. Again, that's AD,
21:37
QUICK.com. Let's talk about that
21:39
infrastructure. So. Data centers. We have
21:41
a very diverse listener group for this
21:44
podcast and so some people are intimately
21:46
familiar with walking to a data center,
21:48
some people less so. Let's see first.
21:50
You walk into a data center, look
21:52
to your left, look to your right, what
21:54
do you see? It looks like that scene
21:57
in The Matrix when Neo and Trinity asked
21:59
for guns and you're in a white
22:01
room and just racks and racks of
22:03
guns show up. And that's what you,
22:05
you know, what you see in a
22:07
data center, you see very large numbers
22:09
of racks that have within them, you
22:11
know, pizza box-sized computers stacked up and
22:14
cooling systems at the back and power
22:16
delivery. And what is, you know, how,
22:18
I mean, in a sense, what you
22:20
see today is not too different in
22:23
my view to what we saw 20
22:25
years ago, but in reality everything
22:27
is much denser, right? The
22:29
networking bandwidth interconnects are running
22:31
100 times faster than they
22:33
used to be. The power
22:35
demand is much, much higher. I
22:37
mean, I think that the most
22:40
recent sort of H100
22:42
racks — the H100 is
22:44
a big NVIDIA GPU
22:46
that people like Meta use — they'll
22:48
be running at 130 kilowatts per rack,
22:50
which is... 70 Kettles is how
22:52
I think about it. You know,
22:54
to put that in the context,
22:57
the first servers that I
22:59
ran to serve websites back in
23:01
1996 ran to 200 watts each,
23:03
and I had four of them.
23:06
So I had 800 watts sitting
23:08
there, roughly, including the sort of
23:11
little Ethernet switch that was
23:13
connecting them to the internet.
23:15
So we've gone from 800
23:17
watts to 130 kilowatts per
23:19
rack, and that's the power demand,
23:21
but of course you then have to
23:24
cool this as well, and you
23:26
have sort of storage requirements
23:28
as well. And I think roughly
23:31
speaking, about 40 to 45% of
23:33
this goes into actually doing
23:35
the thinking, right — the flops
23:37
and the processing — about 40% goes
23:39
into the cooling, and then the
23:42
rest is, you know, networking and redundancy.
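As a back-of-the-envelope check on the numbers in this exchange (130 kW per rack, the "70 kettles" framing, 800 W for the 1996 setup, and the rough compute/cooling split), here is a small sketch; the per-kettle wattage and the exact shares are assumptions for illustration:

```python
# Back-of-the-envelope numbers from the conversation; kettle wattage and exact
# energy shares are illustrative assumptions, not measured figures.

rack_kw = 130.0          # modern GPU rack, per the discussion
kettle_kw = 1.8          # assumed per-kettle draw; gives ~72 kettles, close to "70 kettles"
servers_1996_kw = 0.8    # four ~200 W web servers plus a small switch

print(f"kettle equivalents per rack: {rack_kw / kettle_kw:.0f}")
print(f"growth vs. the 1996 setup:   {rack_kw / servers_1996_kw:.0f}x")

# Rough split of where the energy goes, per the discussion:
compute_share, cooling_share = 0.425, 0.40       # "40 to 45%" and "about 40%"
other_share = 1 - compute_share - cooling_share  # networking, redundancy, storage
kwh_per_year = rack_kw * 24 * 365                # energy one rack draws in a year
print(f"annual energy per rack: {kwh_per_year:,.0f} kWh "
      f"(compute {compute_share:.0%}, cooling {cooling_share:.0%}, other {other_share:.0%})")
```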
23:44
The cooling often surprises people who aren't
23:46
specialists in this, but it basically comes
23:48
down to, well, data centers don't get
23:50
to cheat the laws of physics. If
23:52
you pump X amount of energy into
23:54
them, that energy has to go somewhere.
23:56
And what you typically need to do
23:58
in most locations is active cooling to
24:01
remove the energy that you've pumped
24:03
into the external environment. In
24:05
some locations that are very cold at
24:07
certain points of the year you can
24:09
use passive cooling, but we're putting data
24:11
centers all over the place. Yeah and so
24:14
active cooling basically means we have to pump
24:16
a liquid in and we have to have
24:18
a heat exchanger somewhere else on the other
24:20
side and you know I think in Colossus
24:23
which is Elon Musk's data center there's a
24:25
fantastic video that shows the size of the
24:27
final cooling pipes and you know that they
24:29
come up to your shoulder, they're pretty
24:31
phenomenal. So one final bit of color
24:33
before we go into the recent developments
24:35
in this, but density drives so much
24:38
of both the economics and the operational
24:40
concerns of data centers and as we've
24:42
heard over the last couple of decades
24:44
they're getting more dense per square centimeter —
24:46
cubic centimeter, I suppose, because height
24:49
is a material thing; we stack these
24:51
boxes on top of each other. Why
24:53
does density matter fundamentally for the operator?
24:55
Because a data center is fundamentally a
24:57
real estate business, with some value add
25:00
being the power, cooling, and on-site
25:02
technical services. But an example of
25:04
a counterintuitive thing with respect to
25:06
what the density and intensification does
25:08
for you: data centers, because they
25:08
have huge amounts of electricity running
25:10
through them, operating at high temperatures —
25:12
as high as you can go
25:14
without damaging the chips — are not
25:18
the safest places in the world
25:21
to be, particularly during some failure
25:23
modes. And so when I first
25:25
got the badge that would allow
25:27
me into a data center when
25:29
I was working at a Japanese
25:31
systems integrator back in the day, I
25:33
had to be given a safety briefing before
25:35
I got the badge, because there is a
25:37
device on the wall called colloquially a big
25:40
red button. And there are many genres
25:42
of big red button in the world. A
25:44
thing you really want your young engineer to
25:46
understand before they walk into that room is,
25:48
is this the big red button that just
25:51
drops all the power in the room? Or
25:53
is this the big red button that you
25:55
have 60 seconds after you press it before
25:57
every living thing in the room dies? Yes.
26:00
the Halon, the Halon gas, right,
26:02
comes out to extinguish an electrical
26:04
fire. That's one of the things
26:06
your on-site safety engineer will be
26:08
really, really interested in making sure
26:11
that everyone taking even a guided
26:13
tour of that room understands. Anyhow.
26:15
So, now you know what it looks like
26:17
in the data center. So let's take
26:19
a look around the... Can I just
26:22
add to this point, right? So, you
26:24
know, one of the things that you've
26:26
described, which is this, that increasing density,
26:29
also has an impact on the physical
26:31
real estate asset. So many data centers
26:33
that exist today, and, you know, if
26:36
we've driven down freeways in parts of
26:38
the US, you'll have seen these buildings
26:40
that look like warehouses, but are not
26:43
warehouses, turns out... you can't upgrade them
26:45
to these modern AI data centers because
26:47
they actually can't maintain the power delivery
26:49
and the cooling delivery that the new
26:52
chips require. So you know, as the
26:54
chips get more and more dense, they
26:56
get hotter, they need better cooling, they
26:58
need more reliable power, and in fact
27:00
you need... different physical architectures. You physically
27:03
need new buildings as well. And there's
27:05
a kind of unintended consequence, I guess,
27:07
of Moore's Law and Huang's Law and
27:09
whatever else has sort of replaced those
27:11
laws to that make the chips more
27:14
power efficient and kind of more dense for
27:16
flops per cubic centimeter is that
27:18
they need different buildings. Yeah, one
27:20
of the more mind-blowing things in
27:22
my career as a systems
27:24
engineer. Systems engineers build combination software
27:27
and hardware systems. I was definitely more
27:29
on the software side than the hardware
27:31
side. If I never own another server
27:34
in my life, I will be very
27:36
satisfied with that. But there existed a
27:38
data center and the physical amount of
27:41
weight of the server racks was over
27:43
the physical capacity of the floor that
27:45
the server racks were on. And you
27:47
can't simply run a command on your
27:50
terminal to make the floor stronger than
27:52
the architect designed it to be. And
27:54
at the point where you are saying,
27:57
okay, we'd like to replace not just the
27:59
thin metal shell that is on top
28:01
of the floor, but no, actually, we
28:03
need the structural floor replaced. Then you
28:05
start thinking, okay, it might be time
28:07
to build a new building. Right. I
28:09
think that what you just described there
28:12
is one of the things that's most
28:14
misunderstood about the, you know, the nature of
28:16
this particular game, because for, you know,
28:18
the bulk of us, our experience with
28:20
super computers are things that weigh, you
28:23
know, five ounces, right? It's our cell
28:25
phones. and they've always got smaller and
28:27
they've largely got lighter and that's always
28:29
been the way they've been sold to
28:31
us and you know that's true about
28:34
laptops as well and we think about
28:36
our monitors they get smaller and
28:38
smaller and smaller on our desks and
28:40
in a way, the miniaturization —
28:42
the packing of more transistors onto
28:44
every square centimeter or square inch of
28:47
a die — has the reverse effect on
28:49
physical architecture and it's a really important
28:51
notion and I think it makes it
28:53
really complex when you start to think
28:55
about the — you know, we tend to depreciate
28:58
buildings over a multi-decade period;
29:00
you don't depreciate computer hardware over that period.
29:02
It used to be four years; now
29:04
Google — or, you know, Alphabet — and
29:06
Amazon have moved that to six years. But
29:09
you have this sort of difference in
29:11
the kind of tenor on, you know,
29:13
the financing side and the depreciation side. It's
29:15
much, much more complex than just upgrading
29:17
your iPhone every three years. We'll
29:20
continue our interview in a moment after
29:22
a word from our sponsors. In an age of
29:24
AI and big data, your personal information has
29:26
become one of the most valuable commodities.
29:28
Data brokers are the silent players in
29:30
this new economy, building detailed profiles of
29:33
your digital life and selling them to
29:35
the highest bidder. With data breaches happening
29:37
frequently, these profiles aren't just fueling spam.
29:40
They're enabling identity theft and putting your
29:42
financial security at risk. But what if
29:44
there's a way you could opt out
29:47
of that system entirely? Incogni is a
29:49
service that ensures your personal data remains
29:51
private and secure. They handle everything from
29:54
sending removal requests to managing any pushback
29:56
from data brokers. What I trust most about
29:58
Incogni is how it provides ongoing
30:00
protection. Once you're set up,
30:02
it continuously monitors and ensures
30:04
your information stays off the
30:06
market. Want to extend that
30:08
peace of mind to your
30:10
loved ones? Incogni's family and
30:13
friends plan lets you protect
30:15
up to four additional people
30:17
under one subscription. Take control
30:19
of your digital privacy today.
30:21
Use code upstream at the
30:23
link below and get 60%
30:25
off an annual plan. Again, that's
30:27
Incogni.com slash upstream. Eric here. In
30:30
this environment, founders need to become
30:32
profitable faster and do more with
30:34
smaller teams, especially when it comes
30:36
to engineering. That's why Sean Lanahan
30:39
started Squad, a specialized global talent
30:41
firm for top engineers that will
30:43
seamlessly integrate with your org. Squad
30:46
offers rigorously vetted top 1% talent
30:48
that will actually work hard for
30:50
you every day. Their engineers work
30:52
in your time zone, follow your
30:54
processes, and use your tools. Squad
30:56
has front-end engineers excelling in TypeScript
30:58
and React, and Next.js, ready to
31:00
on-board to your team today. For
31:02
back-end, Squad engineers are experts at
31:05
Node.js, Python, Java, and a range
31:07
of other languages and frameworks. While
31:09
it may cost more than the
31:11
freelancer on Upwork billing you for
31:13
40 hours, but working only two,
31:15
Squad offers premium quality at
31:17
a fraction of the typical
31:19
cost, without the headache of
31:22
assessing for skills and culture
31:24
fit. Increase your velocity without
31:26
amping up burn. Visit
31:28
choosesquad.com and mention Turpentine to
31:30
skip the wait list. So, both in
31:33
the historical perspective and in
31:35
the near future perspective, where
31:37
did we build data centers
31:40
in the physical universe? Well, we
31:42
started, I mean, the very first
31:44
data centers, I mean, to go
31:46
back to that, tended to be
31:48
in cheap bits of land that were...
31:51
reasonably close to where
31:53
customers were. So in the
31:55
US, it would be in
31:57
Reston, Virginia, as one example,
31:59
because you had Bolt Beranek and
32:02
Newman — which was one of the architects
32:04
of the sort of NSFNET
32:06
kind of precursor to the commercial
32:08
internet — based over there, and you
32:10
know I think that was one of
32:12
the reasons why AOL ended up being
32:14
there. In the United Kingdom there was
32:16
a place called Docklands, which was
32:18
very very cheap light industrial land that
32:20
was not too far from the city
32:22
of London, where those banks were
32:24
among the first users of you know
32:26
high-speed cabling so you were really you
32:28
really thought about you know that particular
32:30
dimension which is like relative
32:32
proximity to customers but also
32:34
still being, you know, quite cheap
32:37
for the land. That dynamic
32:39
has of course continued, and
32:41
we know that you know the
32:43
northeast bit of the US is
32:45
really really big for for data
32:47
centers but I think there are
32:49
now considerations around the accessibility,
32:51
fundamentally, to electricity, right? Is there
32:53
sufficient electricity for the work that
32:55
we need to get done? Before
32:57
we get into the electricity point,
33:00
some fun color, why do we
33:02
put data centers close to the
33:04
customers? Back in the day, it
33:06
was more about putting them close
33:08
to employees slash skilled technicians. And
33:10
so if your server breaks down,
33:12
you might need your... system administrator
33:14
to drive out from Chicago to
33:16
one of the suburbs to reboot
33:18
it. But say in the late 90s network
33:20
latency was not that huge of a
33:22
consideration because who in the late 90s
33:25
was doing anything where you could tell
33:27
a difference between an 800 millisecond ping
33:29
time and a two second ping time.
33:31
Fast forward to today network latency
33:33
is a primary consideration for where
33:35
these go. And so there are
33:38
worldwide networks of data centers at
33:40
the largest firms in capitalism, and also
33:42
at firms selling to the rest
33:44
of the economy — well, the largest firms
33:46
in capitalism also sell to the rest
33:48
of the economy. We'll talk about that
33:50
in a moment to optimize essentially for
33:52
network latency. And then this new power
33:54
constraint is sort of new. The data
33:57
center usage up until recently in
33:59
the United States was probably single
34:01
digit percentage of all the national
34:03
electricity demand. So no small amount,
34:05
but we get no small amount
34:08
of value out of computers. So
34:10
that's fine. But with the densification,
34:12
with the notion of having entire
34:14
buildings full of H100s running on
34:17
training and inference, we start to
34:19
have real constraints about can we
34:21
physically pump as many electrons through
34:23
the grid as we need to call
34:25
back to last week's episode. Anyhow.
34:28
Yeah, but can I also put
34:30
some history on this as well? Because,
34:32
you know, AI is a technology which
34:34
I think is incredibly important.
34:37
It's also turned out to
34:39
be very divisive in debates
34:41
both within the industry and
34:43
outside the industry, and I
34:45
can't really remember a technology that,
34:47
you know, triggered such a
34:49
split in people's perspectives. Oh,
34:51
can I give you
34:53
one? Yes, please. Cloud was — in,
34:55
say, the late 2000s to early
34:58
2010s, depending on where exactly you were
35:00
in the world, there were huge debates
35:02
within both the engineering community and in
35:05
the ones who were specifically hands on
35:07
the metal for most of their careers.
35:09
Will big businesses ever consent to use
35:11
somebody else's server and somebody else's building
35:14
for their most private customer data? But
35:16
okay. Yes, there used to be this
35:18
ad by an on-prem company — I forget
35:21
which one it was — an advert, and
35:23
it said, "it's someone else's computer,"
35:25
as a way of saying something negative
35:27
and derogatory about the cloud.
35:29
I remember working at a Japanese
35:31
system integrator where we had a
35:33
multi-year debate with the customer base
35:36
which were mostly universities in my
35:38
part and this would have been
35:40
in the late 2000s and the
35:43
universities would say we really really
35:45
want to have all of our student
35:47
information in a location that we
35:50
control where it will be safe,
35:52
not in some building somewhere where
35:54
we have no visibility. And the
35:56
true engineering fact of the matter was
35:58
the location they controlled was literally an
36:00
unlocked broom closet on campus where anyone
36:03
could walk in under the influence of
36:05
a hangover or similar and walk out
36:07
with all the data. And there was
36:09
a bit of an adoption curve in
36:12
the Japanese enterprise, but the
36:14
somewhat stodgier members of the Japanese
36:16
enterprise did get there a few
36:18
years after the American enterprise
36:20
and similar did. But this was a
36:22
live issue back in, you know, as recently
36:25
as 10 years ago. The power issue
36:27
has also been a longer
36:29
term issue than generative AI,
36:31
large language models, and the
36:33
ChatGPT moment. One of
36:35
the reasons I think this
36:37
has become so present in
36:39
people's minds has been that there's
36:42
a lot of skepticism about
36:44
the value that AI brings.
36:46
But before ChatGPT in November
36:49
2022, without any of us
36:51
knowing this was going to be
36:53
a thing, we'd already started to
36:55
see places like Singapore which host
36:57
a lot of data centers and
36:59
Ireland and a number of cities
37:02
start to say we can't provision
37:04
any more data centers, and those
37:06
data centers were just for sort
37:08
of pre-AI uses like
37:10
just moving customer data becoming a
37:12
digital business, and if you
37:14
look at the CapEx of a
37:16
firm — I'm just going to
37:18
look at Microsoft for example. In
37:21
2021 — in that year — Microsoft was
37:23
going to spend 20 billion dollars
37:25
on CapEx. Only three years earlier,
37:27
it was at 10 billion. So
37:29
it was doubling in three years.
37:31
And this was well before the
37:33
OpenAI deal had manifested itself.
37:35
It was well before ChatGPT.
37:37
So one of the things I
37:39
think we need to also contextualize
37:41
was that even before
37:43
AI and before this Gen AI
37:45
thing, data center demand was growing
37:48
really really significantly partly because of
37:50
the point you made about
37:52
the cloud right companies want customers
37:54
data close to customers and they're
37:57
moving all of it off-prem and you know
37:59
we are now seeing that accelerate, but
38:01
it's not first and foremost in
38:03
my view something that is just
38:05
about, you know, just about AI.
38:07
AI has certainly accelerated it, but it
38:09
hasn't, you know, been the sole
38:12
spark for it at all. Yep. And if
38:14
I can give a shout out to Leopold's
38:16
paper here, Situational Awareness. These are
38:18
the sort of things which were obvious to
38:20
some people back in the day, but they
38:23
were not obvious to, you know, extremely informed
38:25
planners of electricity demand for metropolitan areas and
38:27
nations. They weren't obvious to hedge funds that
38:29
were following the space, etc., etc. They were,
38:32
you know, conversations at dinner parties in San
38:34
Francisco that said, hey, we might need a
38:36
trillion dollars worth of new power built out
38:38
in the next couple of years. Trillion with
38:41
a T, that's kind of wild, huh, what
38:43
could we do with that? a trillion of
38:45
anything is a lot, but let's also
38:47
just look at US overall
38:49
electricity demand. I mean electricity
38:51
or energy in general is
38:53
wealth and energy is prosperity and
38:56
energy is health. There are no
38:58
countries with good outcomes for their
39:00
people broadly defined that don't have
39:03
high levels of energy consumption per
39:05
person whether you're efficient about it
39:07
or not. The thing about the US is that
39:10
as of 2020, 2021, electricity
39:12
usage was pretty much at the
39:14
level of 20 years earlier. Now
39:17
that is a really really important
39:19
thing to look at. Now of
39:21
course you see energy efficiency,
39:23
right? The switch to
39:25
LED light bulbs is
39:27
tremendous. You see environmental
39:30
standards emerging and those...
39:32
making people think much more about
39:34
their energy efficiency. It's also good
39:36
business because electricity costs money and
39:38
you know if you can do
39:40
the same commercial output with less
39:43
sort of cost of inputs that's
39:45
more profit for you. But at the
39:47
same time for it to be flat says
39:49
that there is something about the
39:51
the kind of collective agreement by
39:54
power providers to invest in
39:56
in capacity. And you know this was
39:58
off the back of essentially a doubling
40:00
of electricity consumption between 1975 and about
40:02
the year 2000 — so then it suddenly
40:04
goes flat. And at that time we
40:06
also started to see the electrification certainly
40:08
of certain types of passenger car transport,
40:10
right? The Teslas show up and so
40:12
on. So I think that when we
40:14
start to diagnose this we have to
40:16
also go a little bit further back
40:18
and I'm not a historian of the
40:20
US sort of electricity system in great
40:23
detail, but, you know, it's kind of odd
40:25
that it flatlines 20 years ago
40:27
and we
40:29
don't start to make those investments, frankly
40:31
either in the US or in many
40:33
parts of Western Europe. I'm also myself
40:35
not exactly an energy economist. I would
40:38
say one thing which probably contributes to
40:40
it was the US has undergone some
40:42
structural economic changes over the course of
40:45
the last several decades, and there was a
40:47
bit of a substitution: manufacturing output
40:49
in the United States is as high as
40:51
it's ever been; manufacturing employment
40:54
is lower; people sometimes confuse those
40:56
two. But we largely shifted from
40:58
a manufacturing focused economy to a
41:00
services-focused economy, and per dollar
41:03
of output value, services use various
41:05
resources, inclusive of electricity, less intensely.
41:07
But I do agree that there
41:10
was a failure to anticipate future
41:12
needs, and also I think in
41:14
the United States in many places
41:16
in Western Europe and many places
41:18
near and dear to the hearts of
41:20
many listeners of this podcast. There's
41:23
been a real reluctance to
41:25
build things in the physical
41:27
world. It's almost like we have
41:29
lost either the will, the knowledge, or the
41:31
capacity to do so in
41:33
some places in ways that
41:36
seem absolutely mind-boggling. And when
41:38
we... I lived 20 years in Japan, and
41:40
Japan has many problems but refusal to
41:42
be able to build buildings is not
41:44
one of them. Right. And then, you
41:46
know, look over the ocean over to
41:49
China. China certainly has not forgotten how
41:51
to, you know, do solar deployments, for
41:53
example. And I think one of the
41:55
most crucial things in this sort of
41:57
moment we find ourselves in is rediscovering
42:00
the complex system that will
42:02
allow us to actually build
42:04
the infrastructure that our future
42:06
economic needs depend on.
42:08
Yeah, I mean, I absolutely
42:11
agree with that. I mean, I
42:13
think with China, we see a
42:15
willingness, a desire at sort
42:17
of senior levels of government,
42:19
but also as the sort
42:21
of acceptance amongst people that
42:23
infrastructure is really really valuable
42:26
and it's not just solar manufacturing
42:28
capacity, it's also solar deployment,
42:30
and it's deployment of solar at utility
42:32
scale and on rooftops it's about
42:34
the deployment and build out of
42:36
nuclear power stations very very rapidly
42:38
it's about high-speed rail it's also about
42:40
transmission one of the things that
42:42
of course is really challenging in
42:44
the US, a lot of which is
42:47
to do with market structure and regulation
42:49
is building transmission lines but you know
42:51
China has 34 ultra-high-voltage transmission
42:53
lines that are, you know, very kind of
42:56
energy efficient and don't
42:58
have a lot of energy loss over those
43:00
long distances, totaling tens of thousands
43:02
of miles right and one of the things
43:05
that that does especially when you deal with
43:07
intermittent resources like solar and wind is it
43:09
allows you to move the electrons to where
43:11
they need to be you know consumed you
43:14
know if it's sunny in the place and
43:16
they're not being consumed locally you can move
43:18
them to where they're needed. We had discussion
43:20
about this a few weeks ago with Travis
43:23
Dauwalter on the changing needs of
43:25
transmission lines in the United States,
43:27
and if I can elaborate just
43:29
slightly more on what you've said,
43:31
I think one of the most
43:34
important facts of energy economics has
43:36
been the extreme performance of the
43:38
learning curve for solar power versus
43:40
cost over the course of the
43:42
last 25 years. I remember, of
43:44
course, at the time when I
43:47
graduated university, about 2004, it
43:49
did not look likely that
43:51
solar was ever going to
43:53
be economical against coal, for
43:55
example, absent huge subsidies
43:57
for social reasons. And it turns
43:59
out... that not only did we
44:01
continue down the cost curve, we
44:04
actually bent that curve — the learning
44:06
accelerated as there were you know
44:08
multi-billion — tens of billions, hundreds
44:10
of billions of dollars — of investment
44:12
into solar deployment. And so
44:15
the transformation of the energy grid is
44:17
one of probably the most central aspects
44:19
of the coming infrastructure wave, but solar is
44:21
not the only power generation thing that is
44:23
going to shake up in the course of
44:26
the next decade or two. You mentioned that
44:28
China has been doing large-scale nuclear build-outs, which
44:30
I kind of feel a little bit jealous of,
44:32
but do you want to say a few words
44:34
about the hottest new nuclear technology that we
44:36
might be co-locating with data centers in the near
44:39
future? Well, in the near future, let
44:41
us talk about China's electrical capacity.
44:43
They added about 335 gigawatts of
44:45
capacity in 2023, and it was
44:47
29 gigawatts in the US. So
44:49
that's a scale of where we've
44:51
got to. And I think the
44:53
point about the learning curves with
44:55
solar is that they really start actually
44:58
back much, much further back. Back in
45:00
73 or 74, there was a James
45:02
Bond film called The Man with the
45:04
Golden Gun, where this sort of British
45:07
spy has to steal back solar technology.
45:09
It was so important they sent the best
45:11
secret service agent in the world to
45:14
get it. And now solar panels are
45:16
so cheap that in Germany they've
45:18
actually fallen below the price of
45:20
fence panels, and you're starting to
45:22
see people build out vertical balcony
45:24
fences and fences between them and
45:26
their neighbors, which don't, you know,
45:28
they don't catch as much sunlight,
45:30
but it's cheaper and it generates
45:32
some electricity for you. I think
45:34
a lot — you know, we're
45:36
hoping for... quite a lot from
45:38
nuclear, and in particular small
45:40
modular nuclear reactors. So the idea
45:42
of a small modular reactor is
45:45
that it's all of those things.
45:47
It's meant to be small and
45:49
it's meant to be modular. What's
45:51
the benefit of that? The benefit
45:53
of that is that you tend to
45:55
see better learning effects when you
45:58
make more of something and so... and
46:00
you get density better learning effects when
46:02
those things are modular, rather than built
46:04
as products, rather than as projects. And
46:06
so one reason why solar has had
46:09
these amazing learning curves is that, is
46:11
that, you know, the panels, whether on
46:13
my rooftop or in a solar field
46:15
in Texas, are essentially the same. And
46:17
of course, the implementation is slightly different,
46:19
because, you know, one's on a roof,
46:22
the other's on sort of flattish ground
46:24
with, you know, mounted in particular ways.
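A minimal sketch of the learning-curve ("Wright's law") effect behind this products-versus-projects point: cost per unit falls by a roughly fixed fraction each time cumulative production doubles, which is why fleets of identical modules get cheap while n-of-one builds never do. The 20% learning rate here is an illustrative assumption, not a figure from the conversation.

```python
import math

def unit_cost(first_unit_cost: float, cumulative_units: float, learning_rate: float = 0.20) -> float:
    """Wright's law: cost falls by `learning_rate` for every doubling of cumulative units."""
    b = math.log(1 - learning_rate, 2)   # progress exponent (negative)
    return first_unit_cost * cumulative_units ** b

# Modular products ride the curve; an n-of-one project restarts near cumulative_units == 1.
for n in (1, 10, 100, 1000):
    print(f"unit #{n}: cost index {unit_cost(100.0, n):.1f}")
# unit #1: 100.0, #10: ~47.7, #100: ~22.7, #1000: ~10.8
```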
46:26
Nuclear reactors have been built, and
46:28
in many cases are, sort of, n
46:30
of one. You have to start from
46:33
the beginning: a multi-decade bespoke engineering process
46:35
where we get very, very little learning
46:37
between the nth and the nth-
46:39
plus-one iteration of it. Right, absolutely. And
46:41
the idea behind the small modular reactor
46:43
is that you can build these things
46:46
in a modular fashion, so you can
46:48
get learning effects. Because they are small,
46:50
you scale out rather than sort of
46:52
by kind of magnifying the scale of
46:54
things. So if you want more capacity,
46:56
you buy more of them. And frankly,
46:58
that's what we've done in the computer
47:00
industry. If you need a super powerful
47:02
computer, you don't go off and get
47:04
a massive mainframe with huge chips. You
47:06
go and get 10 H100s and stick
47:08
them together — or, you know, a thousand H100s
47:10
and stick them together. This also works
47:13
well against the nature of the
47:15
demand for data center electricity because
47:17
for a large scale nuclear power
47:19
plant that would produce enough electricity
47:21
for a large fraction of a city, and
47:23
you don't have full control in siting
47:25
where that plant is, you probably can't
47:27
justify putting one directly next to the
47:30
newest data center that you popped up
47:32
by the freeway, but for a small
47:34
modular nuclear reactor that might fit in
47:36
a footprint that is about the size
47:38
of a standard-sized shipping container? Sure, put one
47:40
right next to every data center. Put
47:43
two if you want. Put two if
47:45
you want. Right. And I think the
47:47
other thing about the small modular reactors
47:49
is that if theoretically they are safe
47:52
by physics, by the laws of physics,
47:54
as opposed to safe by layers and
47:56
layers of containment systems and safety
47:58
systems. So in some... sense they are
48:01
much more appealing. I guess the issue
48:03
around the SMR that we have to
48:05
recognize is there's a sort
48:07
of TRL technology readiness level risk
48:09
at least in the West. So
48:12
there are some SMR units
48:14
operational in China and Russia. I'm
48:16
not sure how quickly we're going
48:18
to sort of import them into
48:21
the US or into Europe and
48:23
there are lots of companies who
48:26
are building new designs with reference
48:28
designs and you know, we are hoping to
48:30
see them take off. And, you know,
48:32
in a sense, if there's a
48:34
sort of tailwind of demand and
48:37
capital that's available, you could potentially
48:39
scale these out much, much faster than
48:41
we have scaled out, you know, certainly
48:44
nuclear plants, but I think there is
48:46
a recognition also that you need the
48:48
electricity provisioning today, which is why, you
48:51
know, we're starting to see gas generation
48:53
on some of these bigger, you
48:55
know, data centers, whether it's Meta's or
48:58
xAI's Colossus. So you mentioned
49:00
TRL there. Can you say a few
49:02
more words for the benefit of the
49:05
audience? Oh, TRLs. Yeah, technology readiness level.
49:07
So it's a kind of standard level
49:09
of technology readiness that runs from whether
49:12
something is, you know, really, really
49:14
at the high-risk scoping stage, all the way to we know exactly
49:16
how to build it, how to price
49:18
it, and how to implement it, what
49:21
its kind of total life cycle looks
49:23
like. And, you know, in a sense,
49:25
small modular reactors are sort of
49:27
lower down that scale, probably in, I'm
49:30
guessing, I'm kind of extemporizing slightly, but
49:32
certainly in the fours and fives and
49:34
sixes rather than the nines and tens,
49:36
and that creates a certain degree of
49:39
risk and uncertainty of what the outcome,
49:41
you know, looks like. There's also,
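For reference, a rough sketch of the standard nine-level TRL scale being described here, with the level descriptions paraphrased rather than quoted from any official source:

```python
# A rough sketch of the standard 1-9 technology readiness level (TRL) scale,
# as used in NASA/DoE-style assessments; wording is paraphrased, not official.
TRL_SCALE = {
    1: "Basic principles observed",
    2: "Technology concept formulated",
    3: "Experimental proof of concept",
    4: "Component validation in a lab",
    5: "Component validation in a relevant environment",
    6: "Prototype demonstrated in a relevant environment",
    7: "Prototype demonstrated in an operational environment",
    8: "System complete and qualified",
    9: "Actual system proven in operations",
}

def describe(trl: int) -> str:
    return f"TRL {trl}: {TRL_SCALE[trl]}"

# Per the discussion, Western SMR designs sit somewhere in the middle of this scale.
print(describe(5))
```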
49:43
There's also, of course, a regulatory slash political will
49:45
issue about nuclear reactors where I'm going
49:47
to make a terrible pun, but I
49:50
am a dad. I get to do
49:52
dad jokes. They were politically radioactive for
49:54
a number of years in many Western
49:56
democracies, and I think we are in
49:58
a moment, the last couple of years,
50:01
where we can, partly through a
50:03
combination of engineering fact, the new
50:05
technology is simply safer than existing
50:07
technologies, but partly because we had
50:10
good substitutes for base load demand,
50:12
or acceptable substitutes for base load
50:14
demand, liquid natural gas, etc., etc.,
50:17
etc., for many of the last
50:19
couple of decades. And things have
50:21
changed. One is the climate issue,
50:23
of course. We would strongly
50:25
prefer to avoid using
50:28
combustion of hydrocarbons. And then
50:30
the geopolitics of energy usage have
50:32
changed quite radically over the course of
50:34
the last 10 years or so to a point
50:36
where, say, much of Europe, where
50:39
there's a relatively extreme level
50:41
of political engagement around environmental issues,
50:43
it's like, well, you can choose
50:46
either fulfilling all of one's preferences
50:48
with respect to domestic constituencies that are
50:50
vociferously anti-nuclear, or you can choose to
50:52
be warm during the winter. And when
50:54
push comes to shove, many of our
50:56
truest and dearest friends over there will
50:59
probably choose to be warm during the
51:01
winter. Yeah, coming back to small modular
51:03
reactors though, you know, Google
51:05
has this deal with Kairos Power for,
51:07
I think it's six, maybe it's seven
51:09
small modular reactors, and... I think 2030 is
51:12
a delivery time, so we're talking five
51:14
or six years out. A couple of
51:16
interesting things about this is that given
51:18
that it's seven, we are going
51:20
beyond first of a kind, so first of a
51:22
kind tends to be the really expensive one,
51:24
and I did write about this a few
51:27
months ago saying this is kind of a
51:29
Google gift to humanity, because the learning
51:31
curves will be shared, learning experience will
51:33
be shared by all of us, and
51:35
they're the ones who are paying the
51:37
price to bring these things out at
51:39
a high cost.
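A minimal sketch of the first-of-a-kind versus nth-of-a-kind arithmetic behind that point, using Wright's law; the 20% learning rate and the first-unit cost are illustrative assumptions, not figures from the conversation:

```python
import math

# Wright's-law sketch of why "first of a kind is the really expensive one":
# unit cost falls by a fixed fraction with every doubling of cumulative units.
# The 20% learning rate and $1B first-unit cost are illustrative assumptions.
def unit_cost(n: int, first_unit_cost: float, learning_rate: float = 0.20) -> float:
    """Cost of the nth unit under Wright's law."""
    b = math.log2(1 - learning_rate)  # elasticity, ~ -0.32 for a 20% learning rate
    return first_unit_cost * n ** b

for n in (1, 2, 4, 7):
    print(f"Unit {n}: ${unit_cost(n, 1e9) / 1e6:,.0f}M")
# Unit 7 comes out at a bit over half the cost of unit 1 under these assumptions,
# which is the learning that gets shared once somebody pays for the early units.
```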
51:41
But look at the timing, it's six years, and that's only
51:43
seven reactors, and these are only
51:46
small reactors. And so the electricity
51:48
requirements across the US economy are
51:50
really, really enormous, and we have
51:52
to ask how quickly can this
51:55
actually fundamentally scale up. But what
51:57
Google did was they addressed something
51:59
that... this complex system has
52:01
as a roadblock, which is
52:03
that mezzanine financing, which
52:05
is not venture capital level
52:07
risk, but nor is it
52:09
the low-risk guaranteed return of
52:11
asset finance. And that has been an
52:14
issue with a lot of these
52:16
energy and sort of electrification
52:18
hard technologies, which is
52:20
that when you're developing the
52:22
IP, the intellectual property around
52:24
which the equity value and
52:27
the extreme return comes, venture
52:29
capitalists are willing to take
52:31
that risk. But venture capitalists
52:33
are a really small asset
52:35
class. They can't fund infrastructure
52:37
projects. But once they've built the first
52:40
one, you still have risk. You have
52:42
a lot of deployment risk. You have
52:44
all of the learning effects before it
52:46
turns into something that is steady and
52:49
stable, like a solar farm or a
52:51
wind farm, where infrastructure
52:53
investors come in and they ask for
52:56
very very steady state returns with very
52:58
little chance for extreme upside which is
53:00
what the venture capitalists are after.
53:02
So you have this middle period which
53:05
is kind of nth-of-a-kind
53:07
financing risk, which has been really, really
53:09
difficult to address. It's known amongst
53:12
sort of people in climate tech as
53:14
one of the valleys of death.
53:16
There are many valleys of death and
53:19
you know it's a struggle to cross
53:21
it. A fortuitous topic for you to bring
53:23
up because we didn't plan this in
53:26
advance but I've actually spent a good
53:28
portion of my professional cycles the last
53:30
two years volunteering with a focused
53:32
research organization that is attempting to popularize
53:35
next generation geothermal and it is exactly
53:37
that problem and there are VCs
53:39
who are willing to write checks
53:42
into the hopefully defensible IP for
53:44
power generation using next generation geothermal
53:46
but every time you do an experiment in
53:48
the field, you need to spend 20 to
53:50
60 million dollars to ask Halliburton to provide
53:52
you professional services and what is the thing
53:54
they're going to do for you, they're going
53:57
to dig a really deep hole, and 60
53:59
million dollars a hole. Let's go. As you
54:01
said, that is challenging in VC land.
54:03
In a world where all the
54:05
technology risk has been shaken out,
54:08
there are virtually unlimited amounts of
54:10
capital available to do this. So
54:12
the oil and gas industry in
54:15
the United States, for example, it's
54:17
the same people digging functionally the
54:19
same holes. But can you go to
54:21
a bank or other sources of capital and
54:24
get the marginal gas well financed? Absolutely.
54:26
You know, there are people who do
54:28
that every day, like, give us your
54:30
numbers, give us your engineer's numbers, I'll
54:32
put them in my spreadsheet, and away
54:35
we go, you'll have your answer
54:37
tomorrow. And so a lot of the
54:39
last two years for me has been
54:41
attempting to cheat the valley of death on
54:43
behalf of an NGO. Can I ask
54:45
about your experience there that's super
54:48
interesting to me? And what is
54:50
next generation geothermal rather than geothermal?
54:52
Sure. So the brief version is
54:54
that: the geothermal that most people think
54:57
of is places where heat energy from
54:59
the earth bubbles up so close to
55:01
the surface that you can physically perceive
55:03
it in some cases. So hot springs,
55:05
geysers, etc., etc. When you think of the
55:07
places in the world that are the largest
55:10
geothermal energy producers currently, you think
55:12
of places like Iceland, and they
55:14
have been blessed by nature with
55:17
particular subsurface formations that give them
55:19
abundant access to geothermal. Most places
55:22
in the world are not similarly
55:24
blessed by nature. However, due to
55:26
the, particularly the fracking boom in
55:29
the United States, we've gotten much,
55:31
much better at drilling to depths that
55:33
were not economically drillable before. And if
55:35
geothermal can only tap energy that is
55:37
available at the surface of the earth,
55:39
you have to be blessed by nature
55:42
to do it. If, on the other
55:44
hand, you can go down, say, I
55:46
don't know, 6 to 10 kilometers, then
55:48
essentially everywhere is blessed by nature. And
55:50
90-plus percent of the continental
55:52
United States is a number that I've
55:55
heard thrown around. And thus, there is
55:57
still technology risk, you know, digging the hole,
55:59
sure, but you need to figure
56:01
out what the generation station is
56:03
that you put on the top
56:06
of the well, and what the
56:08
curve looks like, because heat in
56:10
the immediate vicinity of the well
56:12
that you have fracked is a
56:14
limited resource. So you tend to
56:16
get a trailing off of the
56:19
generation over some time scale. And
56:21
so we're really looking for those
56:23
next like one to 20, 40, 100
56:25
wells to see what do those curves
56:27
look like?
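As a purely illustrative sketch of the kind of decline curve being described, with made-up parameters rather than field data:

```python
import math

# Illustrative sketch (assumed numbers, not field data): a fracked geothermal
# well that starts at some thermal output and trails off as the fractured rock
# volume around the bore cools faster than the surrounding rock can recharge it.
def well_output_mw(year: float, initial_mw: float = 5.0, decline_per_year: float = 0.08) -> float:
    """Exponential drawdown toward a conduction-limited floor (both parameters assumed)."""
    floor_mw = 0.3 * initial_mw  # long-run output sustained by heat conduction, assumed
    return floor_mw + (initial_mw - floor_mw) * math.exp(-decline_per_year * year)

for year in (0, 5, 10, 20, 30):
    print(f"Year {year:2d}: ~{well_output_mw(year):.1f} MW thermal")
# The open question in the next 20 to 100 wells is what this curve actually looks like.
```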
56:29
And then it's just a numbers game. Like in one
56:32
version of fiscal reality, this is
56:34
not cost competitive with other forms
56:36
of heat or electricity generation. And
56:39
in another version of physical reality,
56:41
we have free clean abundant energy available
56:43
in large portions of the world. And
56:46
so ask me in 10 years which
56:48
reality we're living in. I will have
56:50
a very confident answer in 10 years,
56:52
but I don't currently. And what's the
56:54
price that you think is cost competitive?
56:56
That's not a number that I have cached
56:59
off the top of my head,
57:01
because I didn't know we were
57:03
going to be talking about it. Interestingly,
57:05
just to say a few more
57:07
words on why fracking matters:
57:09
fundamentally, fracking, the oil and gas
57:11
people love to call it subsurface
57:13
engineering, but we had a prior
57:15
episode about fracking. I will link
57:17
it in the show notes, but drill
57:19
the hole, pump a working liquid down
57:21
the hole, and use that to break
57:24
rock around the vicinity
57:26
of the hole. I'm simplifying things here, sorry folks,
57:28
but in one version of the
57:30
system you pump water or some
57:33
other working liquid which filters into
57:35
the cracks that you've made. And
57:37
because the surface area in
57:39
those cracks is, the cracks look
57:42
fractal in nature and the surface
57:44
area is absurd relative to the
57:46
diameter of the hole. And so
57:48
you can pull heat from the
57:51
surrounding rocks for a very long
57:53
time, hopefully, until the rate of
57:55
heat moving into the vicinity of the
57:57
rocks that your water is touching is
58:00
no longer sufficient to sustain the rate
58:02
of heat that you're extracting from the
58:04
top of the hole. And yeah, that's long story
58:06
short for people who want to learn
58:08
more about this field, we'll drop some
58:10
links in the show notes. I mean,
58:12
geothermal is, I think, really interesting and
58:14
potential technology, and it speaks to the fact
58:16
that the energy system feels like it's
58:19
going to continue to be heterogeneous. You
58:21
know, what happened with computing is that...
58:23
we tend to have these sort of
58:25
winner-take-alls, although it's a little bit
58:27
more heterogenous than it looks at the
58:29
surface because kind of an arm chip
58:31
is different to an Intel chip is
58:33
different to a GPU. But I do
58:35
see a kind of world of different
58:37
energy technologies. Definitely. And as we heard
58:40
in the episode with Travis Dauwalter a
58:42
few weeks ago, the heterogeneity of power
58:44
generation makes the grid more stable because
58:46
there are different physical aspects to
58:48
the different power generation technologies; nuclear,
58:50
geothermal, etc. are stable base load
58:52
power, and then solar seems to
58:55
be scalable to the moon, oh
58:57
man, dad joke number two, but
58:59
solar is of course only available
59:01
during particular hours of the day.
59:03
Well, but I think, I think it's
59:05
worth asking a question about how far
59:08
you can actually go with solar, because
59:10
I suspect that it's further than most
59:12
models take it, and just hear my
59:14
case for that. Because solar is highly
59:17
modular, the market expands significantly and that
59:19
means that homes and small businesses as
59:21
well as large scale utility providers can
59:23
get into solar and we've seen this
59:25
happen in large parts of the US
59:28
with community solar but of course rooftop
59:30
solar in China is absolutely enormous and
59:32
Pakistan has a great example where businesses
59:34
got sick and tired of the grid
59:36
failing and so just went off and
59:39
bought loads of solar panels so at
59:41
least 10 hours a day they could
59:43
run their business. The cost curves are
59:45
really, really in their favor, and
59:47
even though we've had 50 years of learning,
59:50
it's not clear that solar panel
59:52
price declines are going to stop, you know,
59:54
sort of any time in the next five
59:56
to ten years and there are new technologies
59:58
bubbling in the wings. So even though it's far
1:00:01
from perfect, the total system cost
1:00:03
is something that's quite dynamic and
1:00:05
the other aspects of the total system
1:00:07
cost will be whatever happens with batteries
1:00:10
and other forms of storage, and batteries
1:00:12
are really early in their learning effects,
1:00:14
I mean the cost of kind of
1:00:16
prestige batteries, right, the lithium-ion battery,
1:00:18
have declined from $1,200 per kilowatt
1:00:20
hour to about $40 per kilowatt
1:00:22
hour from 2011-2012 to the beginning
1:00:25
of 2025.
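A quick back-of-the-envelope on the figures just quoted, to show the implied rate of decline; the dollar figures are the ones from the conversation, and the arithmetic is the only addition:

```python
# Implied average annual decline from the figures quoted in the conversation
# ($1,200/kWh around 2011-2012 down to roughly $40/kWh by early 2025).
# These are the speaker's numbers, used here only to show the arithmetic.
start_cost, end_cost = 1200.0, 40.0  # $/kWh
years = 2025 - 2012                   # roughly 13 years

annual_decline = 1 - (end_cost / start_cost) ** (1 / years)
print(f"Implied average annual price decline: {annual_decline:.0%}")  # roughly 23% per year
```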
1:00:27
And there's probably still some room to run and there
1:00:29
are cheaper technologies with different physical
1:00:31
characteristics, like sodium-ion and iron-air batteries
1:00:33
that are in the wings. And
1:00:35
then you have to think about
1:00:37
how do you manage your distribution
1:00:39
because that becomes an important part
1:00:41
because as you say, right, it's
1:00:44
sunny in one place and it's
1:00:46
not sunny somewhere else. And there are
1:00:48
a couple of really interesting projects that
1:00:50
are going on at the moment. One
1:00:52
is being built in Australia to take power
1:00:54
all the way up to Singapore, another
1:00:57
in Morocco, to take power to the
1:00:59
United Kingdom with these subsea high voltage
1:01:01
direct current cables that are being
1:01:03
built out. There's also some other
1:01:05
interesting second, third, maybe even seventh
1:01:07
order effects for some of these
1:01:10
technologies where Casey Handmer of our
1:01:12
previous podcast, his company Terraform Industries,
1:01:14
I believe. Yeah, is attempting to
1:01:16
do a direct capture of carbon
1:01:18
dioxide to turn into hydrocarbons using,
1:01:21
quote, alien science, end quote, which
1:01:23
is, of course, heavily energy intensive
1:01:25
because you can't cheat the laws of
1:01:27
thermodynamics. Right. But given that you have
1:01:29
huge amounts of solar generation in one
1:01:31
part of a nation, if hypothetically you
1:01:33
can do local generation of hydrocarbons, then
1:01:36
you can ship extremely energy dense hydrocarbons
1:01:38
to wherever you want to put them
1:01:40
in the world, combust them there,
1:01:43
and then just, you know, the carbon
1:01:45
goes back into the atmosphere, you suck
1:01:47
it right back down and turn it into
1:01:50
hydrocarbons again. So Casey is amazing, but
1:01:52
and let's talk about, let's just go
1:01:54
through that cycle again. If we take
1:01:56
carbon dioxide out of the atmosphere and
1:01:59
we push it over the second or
1:02:01
third thermodynamic hump and we
1:02:03
turn it into methane and we
1:02:05
combust that methane, we're net
1:02:08
zero, right? We've not put
1:02:10
any additional CO2 into the
1:02:12
atmosphere and the energy density
1:02:14
of gasoline or methane or
1:02:17
kerosene is absolutely staggering and we
1:02:19
have a whole load of systems
1:02:21
that already know how to use
1:02:23
that.
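A minimal sketch of the carbon and energy bookkeeping in that loop, using standard molar masses and an approximate heating value for methane; process efficiencies, which the conversation does not quantify, are deliberately left out:

```python
# Rough sketch of the "net zero" loop described above: every kilogram of CO2
# captured from the air and turned into methane comes straight back out when
# the methane is burned. Molar masses and energy density are standard values;
# the electrolysis/Sabatier efficiencies are NOT specified in the episode.
CO2_MOLAR_MASS = 44.01   # g/mol
CH4_MOLAR_MASS = 16.04   # g/mol
CH4_ENERGY_DENSITY_MJ_PER_KG = 55.5  # higher heating value, approximate

def methane_from_captured_co2(co2_kg: float) -> float:
    """kg of CH4 synthesizable from co2_kg of captured CO2 (CO2 + 4H2 -> CH4 + 2H2O)."""
    return co2_kg * CH4_MOLAR_MASS / CO2_MOLAR_MASS

co2_captured = 1000.0  # kg
ch4 = methane_from_captured_co2(co2_captured)
print(f"{co2_captured:.0f} kg CO2 -> ~{ch4:.0f} kg CH4, "
      f"~{ch4 * CH4_ENERGY_DENSITY_MJ_PER_KG / 3600:.1f} MWh of chemical energy; "
      f"combustion re-releases the same {co2_captured:.0f} kg of CO2.")
```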
1:02:25
And I think that's a great example of why it may
1:02:28
be that solar could end up being,
1:02:30
and I think it will be the
1:02:32
dominant supply of, you know, primary energy
1:02:34
and electrons into, you know, into
1:02:37
the system. And there are other
1:02:39
things that you can start to
1:02:41
do with the system, like demand
1:02:43
response. So that's where you kind
1:02:45
of affect and create incentives for
1:02:47
people's behavior to change. There is
1:02:49
a, there are lots of air
1:02:52
conditioners and heat systems, heating and
1:02:54
cooling systems in homes in Texas,
1:02:56
whose behavior is actually managed
1:02:58
by the energy provider to
1:03:00
respond to
1:03:03
minute-by-minute and
1:03:05
hour-by-hour changes in the electricity
1:03:07
demand and pricing. And that
1:03:09
can also extend to how
1:03:11
we might shift workloads for compute around
1:03:13
data centers at different times of day.
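A toy sketch of what shifting deferrable compute into cheap hours might look like; the prices and job sizes are invented for illustration and are not from the episode:

```python
# Illustrative sketch (not from the episode): schedule deferrable compute jobs
# into the cheapest hours of a day, the way a demand-response program shifts
# flexible load. Prices and job size here are made-up example numbers.
hourly_price = [42, 38, 35, 33, 31, 30, 34, 45, 60, 72, 80, 85,
                83, 78, 70, 62, 58, 65, 75, 88, 90, 70, 55, 48]  # $/MWh

def cheapest_hours(prices: list[float], hours_needed: int) -> list[int]:
    """Return the indices of the cheapest hours in which to run a deferrable job."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

batch_training_hours = 6  # a deferrable workload; latency-sensitive inference stays always-on
print("Run the deferrable batch during hours:", cheapest_hours(hourly_price, batch_training_hours))
```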
1:03:15
Not everything needs to be right on
1:03:17
the front end. You know, your Akamai
1:03:19
servers, serving up live video, need to
1:03:21
be close to the customer the whole
1:03:24
time. I'm not the world expert on
1:03:26
this, but we seem to be sort
1:03:28
of in land grab mode at the
1:03:30
moment with respect to training new AI
1:03:32
models. But one can imagine a future
1:03:34
in which, for those of you who aren't
1:03:36
familiar, these wonderful AI models that we are
1:03:39
using these days, have typically two phases. There
1:03:41
is a training phase and then an inference
1:03:43
phase. The training phase is the months of
1:03:45
hard work that OpenAI or Anthropic or
1:03:47
another lab put into putting out one of
1:03:49
their new numbered releases of a model. And
1:03:52
then the inference phase is what happens in
1:03:54
the few seconds between when you ask a
1:03:56
question and when an answer comes back to
1:03:58
you. My guess, finger to the wind, without huge
1:04:00
amounts of inside knowledge, is that
1:04:02
the chips that have been doing
1:04:04
training have been running hot essentially
1:04:06
24 hours a day, seven days
1:04:08
a week for the last while.
1:04:10
However, one can imagine future iterations
1:04:12
of this technology where there is
1:04:14
actually some sort of cost benefits
1:04:16
curve associated with it. And so
1:04:18
at times and places
1:04:20
where electricity is particularly expensive, just
1:04:23
stop training for a while and
1:04:25
continue doing the inference on an
1:04:27
on-demand basis. Or, again, currently, you
1:04:29
know, the dominant public
1:04:31
deployment of AI is with a
1:04:33
user sitting at the keyboard typing
1:04:35
into a computer, but that will
1:04:37
not be the case for forever.
1:04:39
It might make sense to... provision
1:04:41
more intelligence when electricity is cheap
1:04:43
to do these sort of quote-unquote
1:04:45
offline calculations on behalf of industry
1:04:47
than to simply continue running inference
1:04:49
at the same levels everywhere in
1:04:51
the 24-hour clock. Or we could
1:04:53
end up in a world where
1:04:55
cognition is just so stupidly valuable
1:04:57
that why would you ever turn
1:04:59
it off just to save on
1:05:01
electricity bills? Well and what I
1:05:03
love about this is these are
1:05:05
these are a few scenarios and
1:05:07
let's throw out some other scenarios.
1:05:09
One is the algorithmic optimization of
1:05:11
the functional level of intelligence that
1:05:13
we want at
1:05:15
a given time. And, you know,
1:05:17
we think about Moore's Law being
1:05:19
this remarkable thing, right? 60% cost
1:05:21
declines on price performance every year
1:05:23
for decades and decades. But software
1:05:25
optimizations can be orders of magnitude
1:05:27
of improvement instantly, like a fast Fourier
1:05:29
transform or, you know, doing something
1:05:31
with a Bloom filter rather than,
1:05:33
you know, mechanically walking through a list. Bloom
1:05:35
filters, that brings me back. That
1:05:37
dates you. Yeah, sorry. I'm just
1:05:39
an old dude. I
1:05:41
can't help it.
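A minimal Bloom filter sketch of the "check a compact probabilistic structure instead of walking the whole list" idea; the sizes and hash choices are arbitrary:

```python
import hashlib

# Minimal Bloom filter sketch: membership checks against a small bit array
# instead of mechanically scanning the full list. False positives are possible,
# false negatives are not. Sizes and hash count here are arbitrary for illustration.
class BloomFilter:
    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [False] * size_bits

    def _positions(self, item: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("h100-cluster-42")
print(bf.might_contain("h100-cluster-42"))  # True
print(bf.might_contain("not-present"))      # almost certainly False
```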
1:05:43
And so there's one thing we have to
1:05:45
think about, which is
1:05:47
software optimizations. There are also,
1:05:49
you know, novel architectures. So
1:05:51
I invested in probably the world's
1:05:53
first reversible computing company. So what
1:05:55
reversible computing does is it has
1:05:58
a different way of processing information
1:06:00
and the reason why Nvidia chips
1:06:02
give off so much heat is
1:06:04
that they're irreversible. So they increase
1:06:06
entropy and you destroy information and
1:06:08
that appears as heat loss.
1:06:10
If you don't do that and
1:06:12
you have reversible processes, you actually
1:06:14
can be a couple of orders
1:06:16
of magnitude in theory, more energy
1:06:18
efficient and it comes at a
1:06:20
kind of certain cost of sort
1:06:22
of complexity for building the gates
1:06:24
up that are required in a
1:06:26
chip.
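For a sense of the physics floor behind that point, a quick Landauer-limit calculation; the constants are standard, and the gap between real chips and this floor varies by process and is not quoted in the episode:

```python
import math

# Landauer's principle: erasing one bit of information dissipates at least
# k*T*ln(2) of energy, which is the physics behind "irreversible logic shows up
# as heat". Conventional chips sit many orders of magnitude above this floor.
BOLTZMANN_K = 1.380649e-23  # J/K
T = 300.0                   # room temperature, kelvin

landauer_joules_per_bit = BOLTZMANN_K * T * math.log(2)
print(f"Minimum energy to erase one bit at {T:.0f} K: {landauer_joules_per_bit:.2e} J")
# ~2.87e-21 J per bit; reversible logic aims to avoid paying even this, by not erasing.
```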
1:06:28
see, you know, 10, 2050X. improvement
1:06:30
in energy efficiency, which is faster
1:06:32
than the wonderful improvements of energy
1:06:34
efficiency we've seen over the last
1:06:36
30 years in computing since we
1:06:38
started to move towards laptops and
1:06:40
then cellular devices. But none of
1:06:42
that actually, that is all extremely
1:06:44
helpful and the market will surely
1:06:46
move to more energy efficient systems
1:06:48
because electricity will always cost something,
1:06:50
but we still have the specter
1:06:52
of Jevons paradox, which is I
1:06:54
think your observation, which is we
1:06:56
never know how useful cognition will
1:06:58
be. And as we said earlier
1:07:00
in our discussion, I think we're
1:07:02
barely scratching the surface. You said
1:07:04
we're at the five computers stage
1:07:06
of LLMs. And so, you
1:07:08
know, net-net, I'm sure
1:07:10
all of this stuff is going
1:07:12
to become orders of magnitude more
1:07:14
efficient, right? Sort of digital IQ
1:07:16
points per watt. We'll get far,
1:07:18
far better than it is today.
1:07:20
And boy, are we going to
1:07:22
demand a lot of it. Points
1:07:24
on the scale. Hmm. IQ
1:07:26
is a useful abstraction to, like,
1:07:28
bat around casually. I
1:07:30
think we'll probably have more powerful
1:07:33
abstractions in the next while to
1:07:35
describe something like, you
1:07:37
know, this LLM is extremely limited
1:07:39
with respect to its capabilities, but
1:07:41
doesn't have to be super genius
1:07:43
to successfully route a package from
1:07:45
point A to point B that,
1:07:47
you know, we will have others
1:07:49
that are assisting people in doing
1:07:51
cutting-edge scientific research, and
1:07:53
every time there's a new model
1:07:55
released every six years, the laboratory
1:07:57
assistants get, sorry, six years? Oh,
1:07:59
yeah, exactly. That was a verbal
1:08:01
disfluency, rather than a prediction of
1:08:03
immediate cratering of the learning curve.
1:08:05
The research assistants will be getting
1:08:07
shockingly more capable over very compressed
1:08:09
timeframes. Well, I think what
1:08:11
you've described there, though, that ecology
1:08:13
is so important for everyone, for
1:08:15
us to understand. You don't always
1:08:17
need, you know, a... PhD in
1:08:19
negotiation to help you decide how
1:08:21
much to pay for the pair
1:08:23
of socks in Walmart that you're
1:08:25
about to buy. That's the price
1:08:27
you just pay for it and
1:08:29
you're done. And I think that
1:08:31
will also be true for the
1:08:33
way in which we embed intelligence
1:08:35
in our in our system. But
1:08:37
we've only really started to sketch
1:08:39
that. I mean, if you think
1:08:41
about humanoid robots that you know
1:08:43
they're down to a few thousand
1:08:45
dollars from Unitree in China,
1:08:47
how much intelligence or whatever proxies
1:08:49
for common sense do
1:08:51
we want? I would say that
1:08:53
it's got to be at least
1:08:55
a GPT-4 level. I mean you
1:08:57
wouldn't want a robot like that
1:08:59
understanding the world as well as
1:09:01
GPT-2 did, which was random sentence
1:09:03
fragments and then going off anywhere.
1:09:05
And you'd certainly want more controllability.
1:09:07
I'm not saying you could just,
1:09:10
you know, lump one of these
1:09:12
GPT-4 class open source models into
1:09:14
a unitry robot and say, go
1:09:16
and look after my kids. I'm
1:09:18
saying that that's kind of surely
1:09:20
the baseline,
1:09:22
surely the baseline. As you say,
1:09:24
you don't necessarily need it to
1:09:26
make an Einstein-level discovery while it's
1:09:28
loading your dishwasher. And these will
1:09:30
be sub-components of larger engineered systems.
1:09:32
I think people extensively underrate that.
1:09:34
We've had robots for a very
1:09:36
long time. There's many folks,
1:09:38
science fiction aficionados, that can quote
1:09:40
Asimov's three laws of robotics and
1:09:42
talk about a day in which
1:09:44
a robot could kill someone for
1:09:46
the first time. And the worked-
1:09:48
in-central-Japan systems engineer in
1:09:50
me says, oh, that day actually
1:09:52
happened in the 1970s, an industrial
1:09:54
accident. But, you know, we can
1:09:56
do things in factories, like saying,
1:09:58
okay, it is possible that a
1:10:02
human-form-factor system might not be
1:10:04
sufficiently intelligent to work in an
1:10:06
uncontrolled environment right now. So all
1:10:08
right, let's cheat all those assumptions.
1:10:10
One, it won't be human form factor,
1:10:10
it will be just a grabbing
1:10:12
arm. Two, we're going to put
1:10:14
it in the middle of a
1:10:16
factory where we control everything around
1:10:18
it and put down some of that yellow
1:10:20
factory hazard tape describing the
1:10:22
physical maximum extent that
1:10:24
the arm can move. And so
1:10:26
you can guarantee the robot system
1:10:26
the invariant: there will never be a
1:10:28
human skull inside this physical hazard
1:10:30
tape during its operation, and therefore
1:10:34
you cannot crush anyone like an
1:10:36
egg.
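A toy sketch of that invariant expressed as a software guard; the envelope bounds are made-up example values:

```python
# Sketch of the invariant described above: reject any commanded arm position
# outside the taped-off envelope, so the guarantee holds regardless of how
# smart (or dumb) the planning software is. Bounds are made-up example values.
WORK_ENVELOPE = {"x": (-1.0, 1.0), "y": (-1.0, 1.0), "z": (0.0, 2.0)}  # meters

def command_is_safe(x: float, y: float, z: float) -> bool:
    """True only if the commanded end-effector position stays inside the envelope."""
    target = {"x": x, "y": y, "z": z}
    return all(lo <= target[axis] <= hi for axis, (lo, hi) in WORK_ENVELOPE.items())

def move_arm(x: float, y: float, z: float) -> None:
    if not command_is_safe(x, y, z):
        raise ValueError(f"Refusing move to ({x}, {y}, {z}): outside taped work envelope")
    print(f"Moving to ({x}, {y}, {z})")  # placeholder for the real motion controller

move_arm(0.5, -0.2, 1.0)    # fine
# move_arm(3.0, 0.0, 1.0)   # would raise: outside the envelope
```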
1:10:38
And anyhow, then there's sort of a, we will not
1:10:40
be in a stable equilibrium as
1:10:42
the software gets smarter, as the
1:10:45
LLMs get smarter, as they unlock
1:10:47
additional fun toys for us on
1:10:49
the software side, we'll find new
1:10:51
ways to build hardware to take
1:10:53
advantage of those new capabilities to
1:10:55
build out larger engineered systems to
1:10:57
put the smaller hardware systems in
1:10:59
such that they can produce even
1:11:01
more value at scale. So wild
1:11:03
time to be alive. It's a
1:11:05
wild time to be alive and
1:11:07
what we have to do is
1:11:09
fix our mental models around
1:11:11
how these technologies emerge. And this
1:11:13
intersection between electricity and computing or
1:11:15
AI is a really, it's bringing
1:11:17
together two very different worlds. So
1:11:19
what's happened in the computer industry
1:11:21
since the, you know, the 1960s
1:11:23
and the 1970s is that we
1:11:25
have tried to bring the computing
1:11:27
closer and closer to the end
1:11:29
user, from the mainframe to the
1:11:31
minicomputer, then, you know,
1:11:33
personal computers, and then,
1:11:35
you know, even smaller smart devices,
1:11:37
I'm wearing a smart ring on
1:11:39
my hand right now. And we've
1:11:41
also then moved into this
1:11:43
hybrid environment where there are certain
1:11:45
tasks that I do locally, like
1:11:47
my sleep tracking, and there are
1:11:49
other tasks that I push out
1:11:51
where there's lots and lots of
1:11:53
computers, lots of storage, and I
1:11:55
just need to get them done
1:11:57
in the cloud. The energy system
1:11:59
was never like that. The energy
1:12:01
system was all mainframes. There's
1:12:05
Three Mile Island, there's the Hoover
1:12:05
Dam, there's a huge coal station,
1:12:07
some power station, and then you
1:12:09
pipe that power over and we're
1:12:11
all consumers. And so all of
1:12:13
our mental models around this are
1:12:15
based around those types of ideas.
1:12:17
And I think a really good
1:12:20
analogy for how the system changes
1:12:22
is what happened with telecoms moving
1:12:24
from that world, which is what
1:12:26
the old pre-internet telephony system
1:12:30
looked like, to what the new
1:12:32
internet looks like. We went through
1:12:32
a period of time when the
1:12:34
new internet was sort of highly
1:12:36
decentralized point-to-point, but all the servers
1:12:38
were back in, you know, Reston
1:12:40
Virginia, or Telehouse in Docklands,
1:12:42
or Palo Alto Internet Exchange, and
1:12:44
then we've started to hybridize this.
1:12:46
So quite often, when you access
1:12:48
a resource that is a resource
1:12:50
in another country, that's actually served
1:12:52
quite local to you, perhaps only
1:12:54
20 miles away, from a front-end
1:12:56
caching system like an Akamai edge
1:12:58
server or a Cloudflare server. That
1:13:00
is a tiered topology. And I
1:13:02
don't see any reason why that's
1:13:04
not what AI infrastructure ends up
1:13:06
looking like. But the bit that's
1:13:08
a real sort of mind... shock
1:13:10
for people is that's what the
1:13:12
energy system might end up looking
1:13:14
like that supports this with localized
1:13:16
generation and vehicle to grid and
1:13:18
community batteries part of the mix.
1:13:20
Yeah, to analogize directly to the
1:13:22
mainframe, love that analogy, you know,
1:13:24
the large scale nuclear plant as
1:13:26
a mainframe, we had a few
1:13:28
decades of usage of like the
1:13:30
local server room with, particularly for
1:13:32
industrial uses, co-located, perhaps behind-the-
1:13:34
meter, small, typically combustion-based electricity generation,
1:13:36
and then we've had rooftop solar
1:13:38
the last couple of years I'm
1:13:40
somewhat bearish on rooftop solar but
1:13:42
that's neither here nor there. At
1:13:44
least, I think there are
1:13:46
places in the world where it
1:13:48
makes a lot of sense, where
1:13:50
for example grids are less reliable,
1:13:52
I think, like California. An
1:13:55
interesting rabbit hole for people to
1:13:57
go down. California and Texas made
1:13:59
very different bets with respect to
1:14:01
both loving solar. California thought we
1:14:03
really really want rooftop generation. That
1:14:05
seems extremely incentive compatible. It will
1:14:07
be green, etc., etc., and it
1:14:09
won't spoil our beautiful landscapes. And
1:14:11
Texas said, the desert is free.
1:14:13
We are going all in on
1:14:15
utility scale generation. And that experiment
1:14:17
was run. The results are in.
1:14:19
Texas won by a lot. And
1:14:21
so yeah. Anyhow. But, like a chicken
1:14:23
in every pot, there might
1:14:25
be a battery in every garage
1:14:27
in the very near future. Certainly,
1:14:29
Elon Musk would love to
1:14:31
wave a magic wand and make
1:14:33
that happen. And community scale batteries
1:14:35
operating at material scale might very
1:14:37
much be a thing in the
1:14:39
next couple of years. It'll be
1:14:41
wild times. So I feel like
1:14:43
we could continue having this discussion
1:14:45
for a very long time, but
1:14:47
I do want to be respectful
1:14:49
of your time and the audience's
1:14:51
attention as well. Where can people
1:14:53
find you on the internet,
1:14:55
Azeem? The best way to find
1:14:57
me is at exponentialview, which is
1:14:59
exponentialview.com or just put it into
1:15:01
your search engine of choice and
1:15:03
it'll show up and you can
1:15:05
sign up to my newsletter there
1:15:07
and then all of my other
1:15:09
links will sort of leaf off
1:15:11
that. Awesome. Thank you very much
1:15:13
for coming out today and for
1:15:15
the audience, thank you very much
1:15:17
for joining us again. We'll be
1:15:19
back next week. Thank you.