Episode Transcript
0:00
You're listening to the Stephen
0:02
Wolfram podcast, an exploration of
0:04
thoughts and ideas from the
0:06
founder and CEO of Wolfram
0:08
Research, creator of Wolfram Alpha
0:10
and the Wolfram Language. In
0:12
this ongoing Q&A series, Stephen
0:14
answers questions from his live
0:16
stream audience about the future
0:18
of science and technology. This
0:20
session was originally broadcast on
0:22
January 31st, 2025. Let's have
0:24
a listen. Hello
0:26
everyone, welcome to another
0:28
episode of Q&A about
0:31
the future of science and technology.
0:33
And I see a bunch
0:35
of questions saved up here.
0:37
Okay, here's an interesting
0:40
one from Prometheus. Do
0:42
you imagine humanity exploring
0:44
inner space, i.e.
0:46
virtual worlds, more than
0:49
outer space? It's an interesting
0:51
question. In a
0:53
sense, one's comparing the physical
0:55
universe and the extent of
0:57
space in the physical
0:59
universe with the computational
1:02
universe. In my way of
1:04
thinking about it, the computational universe
1:06
is ultimately the Ruliad,
1:09
the collection of
1:11
all possible computations one
1:13
can run. And in a
1:15
sense, what one's talking about
1:17
then is the difference between
1:20
exploring physical space and exploring
1:22
rulial space. Exploring physical space
1:24
by sending out spacecraft. Exploring
1:26
rulial space by looking at different
1:29
kinds of programs that one can
1:31
study in the computational universe. I
1:33
mean, it's one thing to have
1:35
a virtual world that is emulating
1:37
our physical world that's emulating the
1:39
laws of physics as we currently
1:41
perceive them. It's another to think
1:43
about having a virtual world where
1:45
the laws of physics are up
1:47
for grabs and you can have
1:49
any law you want, any rule
1:51
you want to define how your
1:53
virtual universe should work. I mean, my
1:55
own efforts in studying what
1:57
I now call ruliology have
1:59
been directed for the last I
2:02
don't know 45 years or something
2:04
to this question of exploring the
2:06
computational universe of possible programs and
2:08
what they do. Some corner of
2:10
that is the programs that are
2:12
the way that we perceive the
2:14
physical universe. But there are other
2:17
programs which in a sense one
2:19
could think of as being the
2:21
ways that aliens could perceive even
2:23
our physical universe, that
2:25
are sort of different possibilities, different
2:27
computational possibilities. I think nobody has
2:29
yet quite made the Ruliad video
2:32
game, but it's a really cool
2:34
thing to think about doing. That
2:36
is... a video game where instead
2:38
of being exposed to the laws
2:40
of physics as they are, you're
2:42
exposed to laws of physics as
2:44
they could be. Now my guess
2:47
is, from having personally been exposed
2:49
to those kinds of virtual physics
2:51
for many decades now, that
2:53
one's intuition about how
2:55
things work has to be developed.
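The "exploring the computational universe of possible programs" idea can be made concrete with a small sketch. The following Python toy is illustrative only (not from the talk): it enumerates a few elementary cellular automaton rules, the simplest corner of rule space, and runs each from a single black cell, with a crude "interestingness" probe.

```python
# Sketch: explore a tiny corner of the space of simple programs
# (elementary cellular automata, rules 0-255), as one concrete
# stand-in for "exploring rulial space".  Rule numbers and the
# distinct-row probe are illustrative choices, not from the talk.

def step(cells, rule):
    """Apply one step of an elementary CA rule to a row of cells."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    """Run a rule from a single black cell; return all rows."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = step(row, rule)
        rows.append(row)
    return rows

# Crude probe of behavior: how many distinct rows does each rule produce?
for rule in (30, 90, 110, 250):
    rows = run(rule)
    print(rule, len({tuple(r) for r in rows}))
```

Sampling rule numbers at random instead of the fixed tuple above is the one-line change that turns this into an open-ended exploration.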
2:57
One doesn't immediately have intuition about
2:59
that. One has intuition about the
3:01
physical world as we currently perceive
3:04
it because we've all grown up
3:06
in that physical world, but this
3:08
virtual world of computational possibilities is
3:10
something one just has to get
3:12
used to. My guess is if
3:14
you're sort of really having sensory
3:16
experiences in that world that one
3:19
will end up needing to kind
3:21
of have a bit of an
3:23
on-ramp from kind of the way
3:25
that we have sensory experiences in
3:27
the current physical world to what's
3:29
possible in the computational universe. You
3:31
know, I have to say this,
3:34
the computational universe is vastly bigger
3:36
than our physical universe. The set
3:38
of all possible rules that one
3:40
can attribute to one's universe is
3:42
much bigger than the set of
3:44
rules that we do attribute to
3:46
our physical universe. There's much more
3:48
to explore in rulial space, across
3:51
the whole Ruliad than there is
3:53
to explore in physical space even
3:55
though our universe is pretty big
3:57
compared to us. So I think
3:59
it's
4:01
almost certain
4:04
that there's sort of more
4:06
that one can do in that computational
4:08
universe than in physical space. So
4:10
in a sense, as we
4:13
explore more about that
4:15
computational universe, expanding
4:17
our domain in that computational
4:19
universe is kind of
4:21
like expanding our paradigms for
4:23
understanding how things work. It's
4:26
different from physical space
4:28
where by expanding our domain, we
4:30
might learn a little bit more
4:32
about how the physical universe works
4:34
in places other than right where
4:36
we are, although we think the
4:38
physical universe is fairly homogeneous and
4:40
that the laws of, well, gravity
4:40
look like they're very homogeneous, and
4:43
in many aspects of the universe, it's
4:47
pretty much the same here as
4:49
it is in other places. So
4:52
there's sort of less to learn in a
4:54
sense than there is in the
4:56
rulial universe, in the
4:58
Ruliad, where one can sort of
5:00
look at what happens with very different kinds of
5:02
rules. I have to say
5:04
one thing that I thought
5:07
many years ago now, which
5:09
came out of my thinking about
5:11
this thing I call the principle of
5:13
computational equivalence: the idea that once
5:15
you get above a very low
5:17
threshold, essentially every system
5:19
you look at is equivalent in
5:21
the computational sophistication it shows. So
5:24
for example, that means in particular
5:26
that our brains are equivalent in
5:28
their computational sophistication to many kinds
5:30
of systems, either abstract ones that
5:32
we can make up or physical
5:34
ones out in the world. And
5:37
it's kind of like you think
5:39
about what's the relationship between a rock
5:41
which has all of these electrons
5:43
going around in all those complicated
5:45
patterns and our brain which has
5:47
all of these electrical signals going around
5:49
in all these complicated patterns. Can
5:51
you draw a fundamental distinction between
5:53
the electron activity in a rock and
5:56
the electrical activity in our brains
5:58
and the claim of the principle
6:00
of computational equivalence is that at the
6:02
level of computational sophistication, you can't
6:04
really tell the difference between those two
6:06
things. Now, there are many things
6:08
about what happens in brains that are
6:11
more special to us. Things like
6:13
the fact that we aggregate our experiences
6:15
to believe that we sort of
6:17
have a single thread of experience, even
6:19
though all our neurons are
6:21
sending different signals all the time,
6:23
we aggregate that into the single thread
6:25
of experience. There's no reason to
6:27
think that a rock does the same
6:29
thing. And there are other sort
6:32
of special aspects of the particular arrangement
6:34
of computational things that represents us.
6:36
But when it comes to just sort
6:38
of the rating of who's doing
6:40
the most sophisticated computation, I don't think
6:42
there's a fundamental way in which
6:44
we win relative to the rock. So
6:47
then the question is, in that
6:49
point of view, if you think
6:51
about sort of the ultimate future of humanity, and
6:54
you think about, well, we can take all
6:56
these things that we're very proud of,
6:58
which are our processes of thinking and so
7:00
on, we can in principle upload those
7:02
to some digital form. And then in
7:04
the end, kind of the thing
7:06
I talked about maybe 20 years
7:09
ago now is kind of the potential
7:11
end point is a
7:13
box of a trillion souls, so
7:15
to speak, where you have
7:17
sort of a trillion
7:19
disembodied human consciousnesses in this
7:21
digital medium. And the
7:23
question then is kind of, how do
7:25
you distinguish looking from the outside?
7:27
How do you distinguish the box of
7:29
a trillion souls from the rock? Both
7:32
have lots of electrons going around in
7:34
complicated patterns. What's the difference between
7:36
the box of a trillion souls and
7:38
the rock? And of course,
7:40
the answer that is important to
7:42
us is, well, the box of
7:44
a trillion souls has the details
7:46
of our cultural history and all
7:48
of those kinds of things in
7:50
it. It is a special collection
7:52
of electrons moving around, whereas the
7:54
rock is something not special
7:56
to us. But to
7:58
this question of... What will the box
8:01
of a trillion souls do for
8:03
the rest of eternity, so to
8:06
speak? And in a sense, the
8:08
slightly cynical point of view would
8:10
be the future of humanity
8:12
is a box of a
8:14
trillion souls playing video games
8:16
for the rest of eternity.
8:19
And that seems really bad
8:21
from the point of view
8:23
of what we think is
8:25
important today. it's worth realizing
8:27
that in the course of
8:29
history the things that have
8:31
seemed important to people have changed
8:33
quite a bit. And to one
8:36
of those souls, so to speak,
8:38
in that box, playing, quote,
8:40
video games forever,
8:42
that may seem as meaningful as
8:44
anything that we do today. I
8:46
mean, in the past, it might
8:48
not have seemed meaningful to do
8:50
things in social media and so
8:52
on, or things that are very
8:54
virtualized today. One would have said,
8:56
why do you care about those
8:58
things in the past? But today,
9:00
we are embedded in an environment
9:03
where we feel like we have
9:05
a reason to care. Anyway, this
9:07
is all to say that one of
9:09
the things that I talked about maybe
9:11
20 years ago now, was the concept
9:13
of, well, what will those souls do?
9:15
in that, you know, they can, in
9:17
a sense, in that virtualized environment,
9:19
they can achieve anything. They can
9:22
make their bits move around in
9:24
any way they want. They're not
9:26
constrained by the physicality of things
9:28
in our perceived physical world and
9:30
so on. And then the question
9:33
is, well, they can explore kind of
9:35
the things about the physical universe,
9:37
but when you finish doing that,
9:39
where are you going to go?
9:41
Well, you're going to... start potentially
9:43
exploring things in the computational universe
9:45
that are not things realized in
9:48
our physical universe in the way
9:50
that we perceive it at least,
9:52
but they're things that are in principle
9:54
possible. And so I suppose the thing
9:56
that kind of made me pause in a
9:59
sense was the thought that the
10:01
future of humanity is
10:03
those future instances
10:05
of our consciousness
10:07
exploring the computational
10:09
universe for the
10:11
rest of eternity.
10:13
I personally have spent some part of
10:15
my life exploring the computational universe
10:17
and it felt a little bit like
10:20
if you imagine the future, you
10:22
imagine imprinting your own preferences on the
10:24
future. This is an ultimate version
10:26
of that, of saying that the thing
10:28
that is the inevitable thing for
10:30
people or generalized people to do is
10:32
the thing that I happen to
10:35
have been doing for a large number
10:37
of decades, so to speak. So
10:40
anyway, there are a few thoughts about
10:42
that. The other thing to realize is that
10:45
in just in terms of sort of the
10:47
physical size of things, there's a
10:49
lot more room down than
10:51
there is up. I mean, in
10:54
rough terms, we're about a meter tall,
10:56
give or take. The physical
10:58
universe, as we perceive it,
11:00
is 10 to the
11:02
26 meters across. There's
11:05
a question in our physics project
11:07
and our physics model, how small
11:09
is the elementary length? We're pretty
11:11
sure that space is ultimately discrete, but
11:14
there's a question of what is that
11:16
length scale that is effectively
11:18
the distance between neighboring atoms of space?
11:21
It's not laid out in space
11:23
as such; the relationship between
11:25
the atoms of space is what
11:27
defines space. But we can still
11:29
translate, we can still say
11:31
what do we perceive? How many sort of
11:33
distances between atoms of space correspond to
11:35
one meter as we perceive it in space
11:37
as we know it right now? We
11:39
don't know the answer to that. It's
11:42
something we'd really like to know
11:44
the answer to. It's something where I'm
11:46
hoping in the next few months,
11:48
actually, to really try and define some
11:50
experimental ways of exploring what is
11:52
the discreteness scale of space. It's
11:55
also my suspicion that we
11:57
probably already know, that
11:59
is, there has already been an experiment
12:01
done that probably had some weird
12:03
result that nobody understood and probably
12:06
when you untangle what that experiment actually
12:08
did, you'll realize actually that shows what
12:10
the discreteness scale of space is.
12:12
So it's a weird case a little
12:15
bit like what you're asking about exploring
12:17
outer space versus inner space. It is
12:19
my suspicion that there's a really good
12:22
chance that we'll be able to make
12:24
progress in understanding the experimental implications of
12:26
our physics project without doing any new
12:28
experiment at all, just by saying, well, you
12:31
know, where does what we're talking about
12:33
relate to things that have already been
12:35
studied? Now, it may help a lot.
12:37
It may be a lot clearer if
12:39
we can do a new experiment of
12:41
our own design, so to speak, and
12:43
say, how does this work? But it's
12:45
quite possible that we can explore
12:47
almost the inner space of what's been
12:49
already studied. But in any case, we don't
12:51
know the elementary length, but there's some
12:54
vague reason to think it might be
12:56
around 10 to the minus 100 meters.
12:58
From our size up to the scale
13:01
of the universe is 26 orders of
13:03
magnitude, from our size down to the
13:05
elementary length is 100 orders of magnitude.
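The orders-of-magnitude comparison here is one line of arithmetic; note that the 10 to the minus 100 meters figure is the speculative elementary length from the talk, not an established value.

```python
from math import log10

# Rough scales from the talk (the elementary length is speculative):
human = 1.0          # metres, roughly human scale
universe = 1e26      # metres across, observable universe (rough)
elementary = 1e-100  # metres, the speculative elementary length

up = log10(universe) - log10(human)      # orders of magnitude going up
down = log10(human) - log10(elementary)  # orders of magnitude going down
print(up, down)  # far more room going down than going up
```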
13:07
So there's a lot more room going
13:10
down than there is going up. If
13:12
we're saying, how can we encode a
13:14
lot of computational activity? We could in
13:16
principle encode that in this very microscopic
13:18
stuff more than we could encode it
13:21
by building a computer that's the size
13:23
of a galaxy, for example. In
13:25
a sense, it should be much easier
13:28
to build that very tiny thing that
13:30
makes use of things that are the
13:32
size of protons or something than to
13:34
build the computer that sort of spans
13:37
the solar system or the galaxy or
13:39
whatever. Now one of the things that's a
13:41
feature of physics is that to probe
13:43
very small length scales you effectively
13:45
need very high energies. That's a
13:48
feature of the uncertainty principle in
13:50
quantum mechanics and so on. So
13:53
there's sort of the question of what
13:55
would it actually take to build a
13:57
computer that makes use
13:59
of things that are happening at
14:01
the scale of things inside a
14:03
proton or something. We certainly
14:05
don't know how to do that
14:08
at this time. And the best
14:10
we can do to sort of
14:12
probe what's inside a proton is
14:14
to bash two protons together at
14:16
very high speed and sort of
14:18
see what happens when they hit
14:20
and see what pieces kind of
14:22
come out when they hit. That's
14:24
kind of like the approach to
14:26
making a clock where you say
14:28
I just take random pieces of
14:30
metal and throw them at each
14:32
other rather than I create something
14:34
with lots of gears and so
14:36
on inside it. So a few
14:38
thoughts on that. I think in the
14:40
nearer term this question of,
14:43
you know, do you want to
14:45
go to the moon or do
14:47
you want to kind of explore
14:49
virtual reality of things that
14:52
might be possible. I mean, I
14:54
think there's just a lot more
14:56
kind of aspirational value to going
14:58
to the moon than there is
15:00
to exploring sort of the inner
15:02
space of what's possible. Personally, I
15:04
have found exploring the inner space
15:06
of what's possible really exciting, but
15:08
I think that's probably a minority
15:10
opinion, at least at this time
15:12
in history, relative to the aspirational
15:14
value of "look, I can go
15:16
to the thing that I can see far
15:18
away in the sky" type thing. But I think
15:20
in terms of what one
15:23
can learn from science, there's a
15:25
lot more to explore in rulial
15:27
space than there is in physical
15:29
space. And I think it is a
15:31
much, much easier thing to explore
15:33
more broadly in rulial space, and
15:36
it's really a worthwhile thing to
15:38
do, and it's what I've spent some
15:40
significant part of my life doing and
15:42
what I've spent a lot of effort
15:44
building the tools to make easy to
15:46
do. From my point
15:48
of view, it's a lot easier to
15:50
build out a giant tower of technology
15:53
in the Wolfram Language than it would
15:55
be to build a spacecraft and make
15:57
it actually work. At least that's my point
15:59
of view. other people might think
16:01
differently about that. But even though
16:04
the number of moving parts
16:06
in the kind of large software
16:08
system that is the Wolfram Language is
16:10
very, very large, actually very large
16:13
compared to anything that you would
16:15
build as a physical object with
16:17
moving parts, but still it seems
16:19
to me a lot easier to
16:21
just you know sit in one's
16:24
chair and type things into one's
16:26
computer and have CPUs and
16:28
GPUs running things than
16:30
to be, you know, strapped in
16:33
a rocket and exposed to lots
16:35
of forces that we humans were
16:37
not really evolved for.
16:39
Let's see. There's a question
16:41
here from Max: could the
16:43
spin of electrons lead
16:46
to a communication system?
16:48
So, well, there's a whole
16:51
field called spintronics
16:53
that is sort of
16:55
a generalization of electronics.
16:58
Electronics is about moving
17:00
electrons around, having electric currents
17:02
that are flows of electrons,
17:04
or electric potential, which can
17:07
be thought of as sort
17:09
of accumulating electrons in one
17:11
place rather than another. But
17:13
electrons, in addition to being, well,
17:15
what appear to be point particles, I
17:18
don't think they really are point particles,
17:20
but let's say that they have a place,
17:22
and they also have a momentum, but
17:25
they have one other attribute, which is
17:27
they have this thing called spin for
17:29
an electron. And from a large-scale
17:31
point of view, we can think
17:33
about spin as being like, oh,
17:35
the electron is spinning around on
17:37
its axis. That picture of the
17:39
electron spin
17:41
seems very mechanical. It's only partly
17:44
useful, I think, in terms of the
17:46
intuition about what's really going on. But
17:48
one can say that there is a
17:50
definite direction associated with this attribute that
17:52
we call spin for an electron. There's the question of what
17:54
it means to have a definite direction
17:57
in a case where we're dealing with
17:59
quantum mechanics. And you have only
18:01
this one moment when you can
18:03
measure: is the spin up or
18:05
is the spin down,
18:07
is the spin aligned this way or
18:09
that way, you don't get to
18:12
sort of probe it many times,
18:14
you just get to say, okay,
18:16
I can check, it's this or
18:18
that, and then you measure it.
18:20
And then you've kind of forced
18:22
it to be either this or
18:25
that, whatever you measured it to
18:27
be. And it's not something where
18:29
in the macroscopic world, you can
18:31
just have an object, you can
18:33
say, I'll look at it once,
18:35
I'll look at it twice, I'll
18:38
look at it any number of
18:40
times, and you'll sort of see
18:42
the same thing every time. In
18:44
the sort of quantum world, it
18:46
is by making that measurement from
18:48
the point of view of our
18:51
physics project, you're picking a particular
18:53
location in branchial space. You're picking
18:55
one of the possible outcomes of
18:57
what could have happened with that
18:59
electron. You, as the observer,
19:01
are at a particular place
19:04
in branchial space, and so you're
19:06
seeing the electron in that form
19:08
rather than in a different form.
19:10
But in any case, so there
19:12
is this attribute of spin in
19:14
electrons, and when electrons scatter against
19:17
each other or scatter off a
19:19
magnetic field, electric field, whatever,
19:21
it can change the spin. Well,
19:23
this question of whether you can,
19:25
you know, can you communicate using
19:27
spin, I think spin is a
19:30
very quantum mechanical phenomenon. And so
19:32
as soon as you're sort of
19:34
using it to do things, you're
19:36
sort of throwing yourself into quantum
19:38
mechanics. And that might be a
19:40
good thing if you want to
19:43
check that nobody tampered with your
19:45
electron from where it started to
19:47
where it ended up, because if
19:49
they'd tampered they would have made
19:51
this measurement and they would have
19:53
determined things and so on and
19:56
you would have you would be
19:58
able to tell that it had
20:00
been tampered with. But also,
20:02
kind of the general
20:04
rules about how you
20:06
think about things don't apply
20:09
there. But so I think the
20:11
answer... well, I mean, electron spins
20:13
in
20:15
bulk are what lead to things
20:17
like magnetism, ferromagnetism. A permanent magnet
20:19
is a bunch of, in the
20:22
iron or cobalt or nickel or
20:24
gadolinium, or whatever you're
20:26
using for your magnet, the electrons,
20:28
there are some electrons in there
20:30
that have their spins all aligned,
20:32
and that's what leads to this
20:35
overall magnetic field, because associated with
20:37
the spin of an electron is a
20:39
magnetic field that's kind of like
20:41
a little tiny bar magnet that
20:43
generates that magnetic field. So I
20:45
think, yes, you
20:48
can imagine using electron spins and
20:50
the interaction between electron spins to
20:52
sort of carry information, necessarily in
20:54
a very quantum mechanical way. Usually,
20:56
qubits, as discussed in quantum computing
20:58
and quantum circuits, one imagines that
21:01
one of the common sort of,
21:03
well, both ways to implement them
21:05
and ways to think about them
21:07
is that each qubit is like a
21:09
spin that can be either, let's
21:11
say, up or down.
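The one-shot measurement behavior described here, where you get a single look at a spin and the act of measuring forces it into up or down, can be sketched as a toy simulation. This is illustrative only (real quantum mechanics involves complex amplitudes and phases that this ignores):

```python
import random

# Toy sketch: a spin-1/2 "qubit" as a pair of real amplitudes (a, b)
# for the up/down outcomes.  Measuring picks "up" with probability a**2
# and collapses the state to what was observed, so a second measurement
# necessarily repeats the first answer.

def measure(state, rng=random):
    a, b = state
    if rng.random() < a ** 2:
        return "up", (1.0, 0.0)    # collapsed to definitely-up
    return "down", (0.0, 1.0)      # collapsed to definitely-down

state = (0.6, 0.8)                 # 36% chance of "up", 64% of "down"
first, state = measure(state)
second, state = measure(state)     # measuring again: same answer
assert second == first
print(first, second)
```

The collapse step is the whole point: unlike a macroscopic object you can look at any number of times, here the first look fixes what every later look will see.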
21:14
Let's see, Intense is asking, how
21:16
would we evolve to live in
21:18
space? Would we even evolve without
21:20
going into space? You know, evolution
21:22
by natural selection requires that some
21:24
organisms live, some die, that the
21:27
organisms that are more favorable for
21:29
the environment get to have more
21:31
children who survive, and that means
21:33
that more of the genes that
21:35
lead
21:37
to whatever it is that makes
21:40
those children survive
21:42
will become more
21:44
prevalent in the population. You know,
21:46
if we have a Mars colony
21:48
or something at some point, and
21:50
there are people who kind of
21:53
grow up on Mars and so
21:55
on, you know, there'll
21:57
no doubt be
21:59
all kinds of
22:01
strangely different
22:03
medical issues to do with
22:06
different gravity levels, to do with,
22:08
well, I'll assume one can shield
22:10
the radiation and so on. But
22:12
I wouldn't doubt that we would
22:14
experience things like the kinds of
22:16
issues we know in microgravity in
22:19
space, when you're in orbit around
22:21
the earth, for example, where you're
22:23
constantly sort of falling and you're
22:25
therefore weightless. You have essentially zero
22:27
gravity. You know, we
22:29
humans were not built for zero
22:32
gravity, so to speak, and all
22:34
kinds of things like our muscle
22:36
mass decreases, I think our immune
22:38
system function decreases, all sorts of
22:40
things like that, for sometimes known
22:42
reasons. With muscle mass it's pretty
22:45
obvious: unless
22:47
you put effort into
22:49
it, you're not exercising your muscles
22:51
in the same way as you
22:53
do supporting yourself against the force
22:55
of gravity, the other ones maybe
22:57
not so obvious. But this question
23:00
of, sort of, would
23:02
the humans on Mars evolve? Well,
23:04
you know, only if there is
23:06
sort of preferential survival of ones
23:08
that are more Martian-suitable than
23:10
others. I think
23:13
it's kind of,
23:15
it's always an irony. I mean,
23:17
even Charles Darwin pointed this out
23:19
that, you know, the struggle for
23:21
life as he called it, that
23:23
this thing that is sort of
23:26
things fighting with tooth and claw,
23:28
I think he talked about, from
23:30
that sort of kind of ugly
23:32
war of creatures, so to speak,
23:34
emerges the thing that we think
23:36
is really cool, which is higher
23:39
organisms and humans and all that
23:41
kind of thing. And if we
23:43
don't allow the tooth and claw,
23:45
so to speak, then we don't
23:47
get evolution by natural selection in
23:49
the same way. If we chose
23:52
to modify our genetics and engineer
23:54
our genetics, then yes, we can
23:56
get evolution, although it's a different
23:58
kind of evolution. You know, in
24:00
animals we often do breeding,
24:02
or in plants, breeding,
24:05
where we're picking, you know, what
24:07
gets to mate with what, so
24:09
to speak, from the outside, we
24:11
can sort of go more extreme
24:13
and just say, okay, let's just
24:15
pick the genome. We can't do
24:18
that yet, but one suspects that
24:20
eventually one will be able to
24:22
do that. It's a very complicated
24:24
issue and a complicated ethical issue
24:26
of if you can pick the
24:28
genome, you know, you are creating
24:31
a human, like all of
24:33
us, so to speak, but you
24:35
are creating a human. I mean,
24:37
we have some control over the
24:39
humans we create, so to speak,
24:41
when we choose to have children,
24:44
not to have children, whatever else.
24:46
But by the time you actually
24:48
are deciding, okay, my child is
24:50
going to have this particular sequence
24:52
of base pairs, that's a higher
24:54
level of control than we've understood
24:57
that we could have before and
24:59
it raises all sorts of very
25:01
thorny ethical issues about sort of,
25:03
given that you have that control,
25:05
what do you do with that
25:07
control? In some sense, it's easier,
25:10
from an ethical point of view,
25:12
in some situations, to just sort
25:14
of say, well, you know, nature
25:16
will take its course. And it's
25:18
not like there's a human choice
25:20
being made that we can be
25:23
ethically concerned about. It's just that's
25:25
what nature is going to do.
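The selection mechanism described earlier, genes that help children survive becoming more prevalent in the population, can be sketched as a toy simulation. All numbers here are invented for illustration.

```python
import random

# Toy model of selection by differential survival: a gene variant that
# improves survival odds spreads through the population over generations.
# Survival probabilities and population size are made-up illustrations.

def generation(pop, survival, rng):
    """Survive individuals by genotype, then refill by random sampling."""
    survivors = [g for g in pop if rng.random() < survival[g]]
    return [rng.choice(survivors) for _ in range(len(pop))]

rng = random.Random(0)
pop = ["A"] * 100 + ["B"] * 900    # variant A starts rare (10%)
survival = {"A": 0.9, "B": 0.6}    # A confers better survival
for _ in range(30):
    pop = generation(pop, survival, rng)
freq_A = pop.count("A") / len(pop)
print(freq_A)  # variant A comes to dominate
```

Remove the survival difference (set both probabilities equal) and the frequency just drifts, which is the transcript's point: no preferential survival, no evolution by natural selection.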
25:27
Let's see. Well, somebody's noticing that
25:29
I have probably the same shirt
25:31
as I had last time. I
25:33
think it may be a different
25:36
actual shirt, but I'm not sure.
25:38
I'm not in
25:40
my most natural habitat here, so
25:42
that's likely to change. Okay, let's
25:44
see. Athelson asks, do you think
25:46
that the rules of... human biology
25:49
are computationally reducible so that we
25:51
eventually will be able to understand
25:53
the aging of ourselves? You know,
25:55
that's an interesting question. You know,
25:57
is aging fundamentally a feature of
25:59
computational irreducibility? Is it something where
26:02
we have the rules as we
26:04
run them, they will just do
26:06
certain things and we don't get
26:08
to have sort of an overarching
26:10
theory of what's happening? Is something
26:12
like aging a bit like the
26:15
law of entropy increase, where, well,
26:17
the molecules are just bouncing around and
26:19
eventually the way that they
26:21
bounce around makes them seem
26:23
random to us, and that's
26:25
sort of all we can say?
26:28
Or can we
26:30
make bigger statements about what's
26:32
happening with molecules bouncing around?
26:34
We know that at a large scale
26:36
we can talk about things like
26:38
fluid mechanics, where we can kind
26:41
of describe the large scale motions
26:43
of molecules without having to deal
26:45
with sort of the microscopic details
26:47
of that. But I don't think
26:49
we know. It's a very interesting
26:51
question. I've sort of paid attention
26:54
to it for 40 years or
26:56
so of sort of what causes
26:58
aging and then what might we
27:00
do about it? Because we do
27:02
know that if we restart the
27:04
organism from the same DNA... and
27:07
we just let it run again,
27:09
you'll get a nice young organism
27:11
that will lead another life, so
27:13
to speak. So it's not as
27:15
if in the course of our
27:17
life, everything is lost. It's that
27:20
in the course of our life,
27:22
the particular instance of it that
27:24
is us, sort of gradually ages
27:26
and degrades. I mean, it's kind
27:28
of like with the computer, it's
27:30
running its operating system, and
27:33
gradually things will
27:35
happen to it, and the computer
27:37
will start running slowly, and the
27:39
memory will fill up, and this
27:41
and that. But if we reboot
27:43
the computer, then it's
27:46
kind of started from
27:48
the same genetic material again, and
27:50
it's good as new, so to
27:52
speak. So, you know, there have
27:54
been a bunch of theories about
27:56
sort of what leads to aging.
27:59
There was one sort of very,
28:01
very clear theory that said, well,
28:03
the end caps on our DNA,
28:05
the telomeres, the repeating sequences at
28:07
the ends of our DNA, you
28:09
know, we have maybe about 50
28:11
of those typically, and maybe after
28:14
we start from our kind of
28:16
initial sort of single cell
28:18
that we start off as, that
28:20
after we replicate that 50 times,
28:22
those telomeres will just all sort
28:24
of fall off and the DNA
28:27
will just untangle and we won't
28:29
be able to replicate the DNA
28:31
any further. Well, that theory, of
28:33
course, like everything in biology, it's
28:35
more complicated than one thinks and
28:37
there are certainly enzymes that add
28:40
telomeres, and that's what happens when
28:42
you go back to the, you
28:44
know, to the fertilized egg cell
28:46
again. It's got its full
28:48
complement of telomeres, courtesy of, I
28:50
guess, telomerase, I
28:53
think. And, you know, then there
28:55
started to be companies that could
28:57
sort of measure your effective age
28:59
by telling you how many telomeres
29:01
you had left in the particular
29:03
sample of your DNA that they
29:06
got. And, you know, it's kind
29:08
of fun to do those. But
29:10
then you realize that, well, actually,
29:12
you can increase the number of
29:14
telomeres by, you know, good diet
29:16
and exercise and things like this.
29:19
And that doesn't seem like a
29:21
theory of aging. It doesn't seem
29:23
like it's like every time the
29:25
cells replicate one telomere falls off.
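The naive "one telomere repeat lost per replication" picture, which the transcript goes on to say doesn't hold up as a full theory of aging, can still be written down as a toy counter. This is a Hayflick-limit-style sketch using the rough figure of about 50 repeats from the talk; it deliberately ignores telomerase and everything else.

```python
# Toy version of the (over-simple) telomere theory: each cell division
# drops one telomere repeat, and replication stops at zero.  The ~50
# starting repeats figure is the rough number from the transcript.

def divisions_until_senescence(telomeres=50):
    count = 0
    while telomeres > 0:
        telomeres -= 1   # one repeat lost per replication (the naive model)
        count += 1
    return count

print(divisions_until_senescence())  # 50 divisions in this naive model
```

The transcript's objection maps directly onto the model's missing pieces: telomerase adds repeats back, and lifestyle appears to shift the count, so a one-way counter can't be the whole story.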
29:27
So, but then there's the whole
29:29
question of, you know...
29:32
that's sort of one partial theory
29:34
of aging. Other theories of aging
29:36
have to do with oxidative damage.
29:38
Essentially, things burn up:
29:40
in existing cells,
29:42
gradually, there's
29:45
sort of the equivalent
29:47
of combustion from metabolism that
29:49
causes molecules to not
29:51
be in the same sort of
29:53
organized form that they originally were.
29:55
I mean, then there are other
29:58
things like progressive genetic damage that
30:00
builds up, although there are proofreading
30:02
enzymes that, after every
30:04
time our DNA replicates, try to
30:06
make sure that there aren't at
30:08
least very small changes that got
30:11
made; those can be corrected
30:13
from what's around that change. But
30:15
so it's still a bit mysterious
30:17
what the real sort of cause
30:19
of aging is. And I think
30:21
it is something where sort of
30:24
the whole computational irreducibility story is
30:26
potentially quite relevant. I tend to
30:28
think that eventually we will have
30:30
more of an overarching theory. of
30:32
biology than we have right now.
30:34
Biology has been rather allergic to
30:37
theory, because a lot of simple
30:39
theories don't work in biology. Biology
30:41
has this sort of meta feature
30:43
that usually if you try and
30:45
explain something in biology, the most
30:47
complicated possible explanation, with many
30:50
footnotes and many special cases,
30:52
will be what's actually going on.
30:54
Whereas in something like physics, it's
30:56
much more likely that the simplest
30:58
explanation will be the right explanation.
31:00
So I think that's something to
31:03
realize. But, you know, I do
31:07
think that sort of the
31:09
problem of aging is probably solvable.
31:11
I mean, we certainly know
31:13
that you go back
31:16
to the beginning again with another
31:18
generation of organism and the clock
31:20
starts again.
31:22
Well, so there's a question
31:24
of different kinds of organisms, different
31:26
species, have very different aging characteristics.
31:29
One of the things that seems
31:31
a little bit disappointing, if true,
31:33
is that we just evolved to
31:35
die after a certain period of
31:37
time because from the point of
31:39
view of the whole species, that
31:42
was a good thing. It might
31:44
not be a good thing from
31:46
our individual point of view, but
31:48
in a sense, if every member
31:50
of your species is immortal, but
31:52
they all get a bit slower
31:55
and so on, eventually it's kind
31:57
of like forget problems of social
31:59
security, you've got the infinite version
32:01
of that, where the young and
32:05
agile are always taking care of the
32:08
old, who are just sort of hanging
32:10
around. Or alternatively, another thing to
32:12
say is it could be the
32:14
case that, you know, the
32:16
organism learns something when
32:18
it's very young, and then it
32:21
lives for a really, really long
32:23
time, it's like, I know everything
32:25
I need to know, I never
32:27
need to learn anything new. And
32:29
it's kind of like, there can
32:31
be no progress made. And for
32:34
all we know, sort of aging
32:36
is an evolutionary adaptation that is...
32:38
kind of the thing that makes
32:40
progress possible, and that without that
32:42
it would just be like, you
32:44
know, everybody is... I mean, you
32:47
do see this, I have to
32:49
say, a bit cynically, in some organizations
32:51
where the people there are sort
32:53
of just getting older and doing
32:55
the same thing that they always
32:57
did. and it takes the young
33:00
to come in and change things.
33:02
It isn't always the young who
33:04
change things. That's a bit of
33:06
an illusion, I think. Maybe I'm
33:08
just speaking as an ancient person,
33:10
but it is not my impression.
33:13
Sometimes the young say, I'm
33:15
embedded in this environment, this is
33:17
the only way it can be.
33:19
And the older are a bit more
33:21
like, yeah, I kind of know
33:23
that environment. Actually, there's other things
33:26
one could do type thing. But
33:28
any case. So, you know, there's
33:30
this question of: is there a
33:34
way?
33:38
If it is indeed something
33:38
that evolution has put upon us
33:41
for the benefit of the species,
33:43
if not the benefit of us
33:45
as individuals, or even our current
33:47
conception of the species, then maybe
33:49
there's just a way to flip
33:51
the switch and say, okay, don't
33:54
do that thing that you were evolutionarily
33:56
set up to do. You know,
33:58
people have imagined an elixir of
34:00
eternal youth, so to speak, sometimes
34:02
got called the philosopher's stone back
34:04
in the Middle Ages and so
34:07
on. You know, is it conceivable
34:09
that such a thing could exist?
34:11
I think it's conceivable. I mean,
34:13
an example of something that isn't
34:15
that, but sort of almost maybe
34:17
might have been that, was
34:20
this question about stem cells. So,
34:22
you know, in... in a human,
34:24
for example, start from one cell,
34:26
it divides, it divides, it keeps
34:28
on dividing. Eventually, it differentiates into
34:30
cells that will be heart muscle
34:33
and brain cells and skin cells
34:35
and so on. And that process
34:37
of differentiation, I think there's like
34:39
12 or 13 levels of differentiation
34:41
in us humans. We don't have
34:43
that many types of cells in
34:46
the end. Lots of copies of
34:48
each individual type. But the... that
34:50
process of differentiation, it was thought
34:52
that the only way you can
34:54
have a pluripotent stem cell, a
34:56
stem cell that can turn into
34:59
any other kind of cell, was
35:01
to sort of restart and have
35:03
the new generation in the fertilized
35:05
egg cell or fetal cells or
35:07
whatever else. But then it was
35:09
discovered, what was it, maybe 15
35:12
years ago or something now, that...
35:14
there was just a way to
35:16
essentially erase the memory of what
35:18
a cell was supposed to be
35:20
and take something like a skin
35:22
cell and reprogram it to be
35:25
a stem cell. It doesn't always
35:27
work perfectly. It's, you know, it's
35:29
a sort of a question of
35:31
does that erasure of memory, does
35:33
it really work or does the
35:35
cell somehow in some corner sort
35:38
of remember that it was a
35:40
skin cell, and is
35:42
its genetics really stable? Or when
35:44
you've done that erasure process, does
35:46
that make the genetics somehow start
35:48
splitting out and making tumors and
35:51
things like this? And you might
35:53
have thought, well, let's just revert
35:55
lots of our cells to
35:59
being stem cells. That's a really
36:01
bad idea to do in general,
36:04
because basically then everything
36:06
would turn into a tumor.
36:08
But the possibility that
36:10
from stem cells one could
36:12
build cells that can replace
36:17
specific kinds
36:19
of cells that exist, that's
36:21
a real thing. And, you know,
36:23
big successes recently with
36:26
pancreatic beta cells for
36:28
curing type one diabetes,
36:30
some success I think
36:32
with cardiomyocytes, muscle cells
36:35
for the heart, some successes, although
36:37
not deployed in brain cells and so
36:39
on, I think it's kind of like
36:41
almost the catalog you can say, well
36:43
what kinds of cells do we now
36:46
know how to take from a created
36:48
stem cell, to guide them through this
36:50
pathway of differentiation and give them this
36:52
thing and feed them that and have
36:54
them do this for five days and
36:57
then have them do that for a
36:59
week and this and that and the
37:01
other to sort of chemically trick them
37:03
into deciding that they're going to turn
37:05
into this particular type of brain
37:08
cell. But so, you know, that's an
37:10
example of something which doesn't make it
37:12
as the elixir of eternal youth, so
37:14
to speak. But it's a, you know,
37:16
it's a thing in that direction. I
37:19
mean, another, another thing that people
37:21
talk about is the idea that,
37:23
you know, blood from young organisms
37:25
has stuff in it that helps,
37:27
that is more energizing. And if
37:29
you give the blood from the
37:31
young organism to an old organism,
37:33
it has to
37:35
match, if you don't want to
37:38
have to suppress the immune system;
37:40
and suppressing the immune system causes
37:42
so many other problems. But, you
37:44
know, then that's a mysterious way
37:46
that one is sort of transporting
37:48
youth into the future. And maybe
37:50
that's something where it would be
37:52
possible to identify what the particular
37:54
factors are that say, you know, I'm
37:56
young blood and so to speak and sort
37:59
of activate things. to make the
38:01
organism feel younger, so to
38:03
speak. Let's see. I mean, just
38:06
to say, I mean, I think there
38:08
are different approaches. The
38:10
thing with stem cells
38:12
right now is using them to
38:14
kind of create specific
38:16
organs. You know, if you build
38:19
the scaffold of a lung and
38:21
then you fill it, or the
38:23
scaffold of a kidney, and then
38:26
you fill it with the right
38:28
stem cells, and get those
38:30
stem cells to become kidney cells
38:32
or become lung cells, then there's
38:34
a chance that you can create an
38:37
artificial organ. If
38:39
the stem cells come
38:41
from you, then your immune system
38:43
will be happy with whatever cells
38:45
came out from those stem cells.
38:48
In general, when
38:50
you have organ transplants and things
38:52
like that, because we all
38:54
have slightly different sort of bar
38:56
codes for our immune system. Our
38:58
immune system is trying to recognize,
39:00
does that thing that just got
39:03
put into me, is that part
39:05
of me or is that an
39:07
alien thing? And it's, if you take,
39:09
you can get a high degree
39:12
of compatibility, but not perfect, just
39:14
because the commonatorics are too big,
39:16
in, you know, unless you're dealing
39:18
with an identical twin, it's not
39:21
going to be the exact same
39:23
genetics. But if you have a
39:25
stem cell that was created from
39:27
you, then it will have the
39:29
exact same genetics as you, and
39:31
so your immune system will be
39:34
perfectly happy with it. And if
39:36
you can have turned those stem
39:38
cells into something that's useful, you
39:40
know, a new, you know, organ
39:42
of some particular type or something
39:45
like that, then, you know, it's a
39:47
whole effort to sort of, you know,
39:49
piece of surgery to connect it in
39:51
and so on, but that's how that
39:53
can work. In terms of sort of
39:55
the broad-scale reversal of aging, it's
39:57
just not at all known how to
39:59
do that. And, you know,
40:01
as I say, just saying, revert everything
40:03
to being a stem cell is absolutely
40:05
not the right thing to do. And
40:08
whether there is a sort of
40:10
a broad, you know, this is
40:12
how aging broadly works, and there's
40:14
some switch that you can turn
40:16
back, something you can turn back
40:19
broadly, it's just not known. And
40:21
it's a very interesting question whether
40:23
sort of thinking about things computationally,
40:25
thinking about biology in this kind
40:27
of broad computational way might lead
40:30
one to a way to think
40:32
about that question. I think it's
40:34
very interesting. And as I get
40:36
older, I get more and more
40:39
sort of interested in that
40:41
question. Well, a bunch of questions
40:43
here about AI and LLMs. Let
40:45
me see what, well, those questions
40:47
here, one from Waffle about
40:50
latest LLMs, they say, are doing
40:52
very advanced mathematics. Do
40:54
you think we can get
40:57
AI to the point that
40:59
it's solving open problems and
41:01
creating new mathematics? I've kind
41:04
of written a fair amount about
41:06
that actually. Doing very
41:08
advanced mathematics, yes, they can
41:10
write something that seems like
41:12
a math paper. When you
41:15
start to dissect it, it often
41:17
falls apart. You have to be pretty
41:19
lucky to end up with something that
41:21
really follows through. and means the
41:23
right thing at the end. It's,
41:25
you know, my analogy for this,
41:27
what's happening in machine learning is
41:30
kind of like it's building something like
41:32
a stone wall. It's got a
41:34
bunch of rocks that are certain
41:36
shapes that it can sort of
41:38
pluck out of the computational universe
41:40
and it's assembling those to try
41:42
and build up what you ask
41:44
it to build. And that's something
41:46
that works up to a point,
41:48
but it's hard to build a
41:50
skyscraper out of random-shaped rocks and
41:52
so on. The thing is, in
41:54
general, sort of, can you
41:57
build sort of this whole chain
41:59
of mathematical reasoning and so
42:01
on and expect the skyscraper not
42:03
to fall over? Well, only if
42:05
you do it in a precise
42:07
formal way. And what we've built with
42:10
Wolfram language is that story of
42:12
being able to do computation for
42:14
mathematics and many other things in
42:16
that sort of precise formal way
42:18
and being able
42:20
to build that tower sort of as
42:23
tall as you want. And I think this
42:25
question of whether... sort of how AI
42:27
helps with that. One thing that actually
42:29
I'm about to really start trying
42:31
to push on is, well, it's read
42:33
the literature. It's read a million papers.
42:36
And so it has sort of a
42:38
broad idea, a broad sort of vague
42:40
understanding of sort of how things fit
42:42
together that can be very useful
42:44
to us. You know, I feel like... I have a
42:47
broad understanding of how things fit together,
42:49
but it knows a lot more detail
42:51
than I do, and I think it
42:53
could potentially help in sort of defining,
42:55
if I give it a rough direction,
42:57
being able to fill in a little
43:00
bit more detail so that one can
43:02
know in what direction to go. I think...
43:04
There are questions about sort of, can
43:06
you, if you're proving theorems in some
43:09
formal way, can you get the AI
43:11
to sort of pick the steps to
43:13
choose? I'm skeptical about that one. I
43:15
think AIs do well
43:17
at sort of human-like tasks, like
43:19
saying what's a plausible next word
43:21
or next sentence for this essay
43:23
or what's a plausible thing that
43:25
somebody writing this math paper would
43:27
say next. When it comes to
43:30
these much more austere and very
43:32
nonhuman things, like a long automated
43:34
proof, they're really kind of
43:36
lost in that case, at least
43:38
in what I've been able to
43:40
figure out so far. And I
43:43
think the thing to say in
43:45
general is, you know, the current
43:47
generation of sort of chain of
43:49
reasoning AI models, of which
43:52
DeepSeek is the big
43:54
excitement of the last couple of
43:56
weeks. What's
43:59
happening there? Sort of the thing
44:01
that is interesting and surprising is
44:03
that it seems like the AI is
44:05
kind of planning and then it's going
44:08
through and it's trying various things and
44:10
if something doesn't work it backs up
44:12
and it tries something different and it
44:14
seems like it's really doing a very
44:16
human-like thinking exploration, so to speak.
44:19
It even has a little tag called
44:21
think, that means that the thing that
44:23
it's, you know, producing now is its
44:25
inner thoughts, so to speak. But you
44:27
know, one of the things that's true
44:29
right now is that the
44:31
way the system works is it's making
44:34
a plan and then it's trying to
44:36
execute on that plan and maybe it'll
44:38
go back and forth in parts of
44:41
that plan, but it's sort of already
44:43
made the plan. It isn't kind of
44:45
looking around as it progresses through the
44:47
plan and saying, well, let me... look
44:50
at this piece of computation here and
44:52
change the plan based on what happened
44:54
from that piece of computation. I mean,
44:56
I will say that I think that
44:59
the idea of making sort of a
45:01
chain of reasoning that is many steps
45:03
long, if you can't turn those steps
45:05
into something hard and computational, it just
45:08
won't work to have a large number
45:10
of steps assembled one after another, because
45:12
as soon as something's gone wrong at
45:14
one step because it's a bit mushy,
45:16
the whole of the rest of the tower
45:19
is going to topple over. And
45:21
something we've been looking at quite
45:23
a bit is using our computational
45:25
language as the thing with which
45:27
to crisp up kind of the
45:29
pure LLM side of things, so
45:31
that at every step,
45:33
the LLM is
45:35
forcing itself to represent what it's
45:37
talking about in a precise computational
45:39
way that can be expressed in
45:41
our language and where we can
45:43
do all kinds of computations from
45:45
it, and more importantly, it's something
45:47
precise, where there's a
45:49
definite brick that got assembled there that
45:51
you can then go on and continue
45:54
the chain from there knowing that that
45:56
part of the chain is not, is
45:58
not mushy. So, now, you know, I think
46:02
in the future, sort of this
46:04
interleaving of kind of LLM,
46:07
neural net type activity with
46:09
computation is really sort of
46:11
the winning combination, but the
46:14
thing that it's been very
46:16
difficult to figure out is
46:18
how to do fine-grained interleaving.
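The coarse-grained interleaving described just below (the LLM runs until it emits a stop token requesting a tool call, an external harness sends the request off to be computed, and the result is fed back into the context) can be sketched as a simple loop. Everything here is illustrative: `fake_llm` and `fake_tool` are stand-ins, not a real LLM or Wolfram Language API.

```python
# Minimal sketch of the tool-call loop: the model emits text until it
# produces a marker requesting an external computation; a harness
# intercepts it, runs the tool, and feeds the result back into the context.

def fake_llm(context):
    """Stand-in for an LLM: asks the tool for 2+2, then concludes."""
    if "TOOL_RESULT" not in context:
        return "CALL_TOOL: 2+2"          # model decides it needs computation
    return "The answer is " + context.split("TOOL_RESULT: ")[-1]

def fake_tool(expression):
    """Stand-in for an external evaluator (demo only)."""
    return str(eval(expression))          # never eval untrusted input

def harness(prompt):
    """The external harness: run the model, service tool calls, repeat."""
    context = prompt
    while True:
        out = fake_llm(context)
        if out.startswith("CALL_TOOL: "):
            expr = out[len("CALL_TOOL: "):]
            # the tool result becomes part of the context, as described
            context += "\nTOOL_RESULT: " + fake_tool(expr)
        else:
            return out

print(harness("What is 2+2?"))            # -> The answer is 4
```

The point of the sketch is that the model and the tool only communicate through the text of the context, which is exactly why this interleaving is coarse-grained.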
46:20
The typical interleaving that one
46:22
does right now is that the
46:24
LLM will keep going and eventually
46:26
it will decide okay I need
46:28
to call Wolfram Language or Wolfram
46:30
Alpha or something and I will
46:32
generate some text together with a
46:34
stop token and it will just
46:37
the text will say okay send
46:39
this stuff to Wolfram Language,
46:41
and then the LLM will stop,
46:43
and some external harness will pick that
46:45
up and say okay I see what
46:48
I should send to Wolfram Language, it
46:50
will go and get a result back.
46:52
It'll then read
46:54
that result, and then it'll treat that
46:56
as part of what was in its
46:58
context from before, and then it
47:01
will keep going from there and
47:03
make a conclusion. I mean, there are
47:05
a couple of technologies. One is
47:07
RAG, retrieval-augmented generation, where you're
47:10
saying, basically do a search in
47:12
a collection of documents for things
47:14
that roughly match some particular query.
47:16
that comes into the system, and
47:18
then you're saying, okay, I know
47:21
based on that query and based
47:23
on the search I've done, here
47:25
are 100 things that you, the
47:27
LLM, might like to think about.
47:29
And that's then very useful
47:31
for what the LLM produces after
47:34
that. It doesn't have to know
47:36
everything itself. It just got essentially
47:39
prompted with 100 things it might
47:41
think about. And for example, our
47:43
Wolfram Notebook Assistant that came out
47:45
a month or so ago now...
47:47
It has a lot of
47:50
technology around that kind of thing
47:52
built in where depending on what
47:54
you've asked about, it will be
47:56
sort of hinted: it will
47:59
essentially have the documentation and so
48:01
on, and it will get hints
48:03
about what you might want to
48:06
think about this or that. But
48:08
so this notion of retrieval augmented
48:10
generation, where you're retrieving things from
48:12
existing documents, that's one thing. The
48:15
thing that we're really able to
48:17
do a lot with is computational
48:19
augmented generation, where instead of there
48:21
being a fixed thing where you're
48:24
looking up... something that matches a
48:26
fixed document, you're instead saying, I've
48:28
got this thing that I produced
48:30
as an LLM, now I'm going
48:33
to compute from that, and sort
48:35
of an infinite universe of things
48:37
you could compute, you get back
48:40
the result of the computation, and
48:42
you use that to augment the
48:44
future generation that you make. But
48:46
I think that the emerging story
48:49
will be one of sort of
48:51
computation and AI kind of hand
48:53
in hand: the sort
48:55
of neural net approach with training
48:58
and so on, together with the
49:00
kind of computational approach that I've
49:02
spent, well, the last 45 years
49:04
developing, of representing the world computationally.
49:07
It's different from a programming language.
49:09
A programming language is, you know,
49:11
C, Java, Python, whatever, is about
49:14
kind of representing what goes on
49:16
inside a computer. and letting us
49:18
sort of tell the computer in
49:20
its terms what to do. The
49:23
whole idea of computational language and
49:25
our world language is to represent
49:27
the world computationally, to represent, you
49:29
know, cities and chemicals and graphs
49:32
and images and so on in
49:34
a computational way, so that
49:38
we can represent things in the
49:41
world computationally and manipulate those things
49:43
in some precise sort
49:45
of abstract way, not just writing
49:48
programs so to speak for the
49:50
innards of the computer, but representing
49:52
the world in a precise way.
49:54
And I think that's the thing
49:57
that really opens up the possibility
49:59
of kind of sort of going
50:01
back and forth between the neural
50:03
net that has its kind of
50:06
rough kind of pattern matching based
50:08
way of thinking about the world
50:10
and the sort of formalized way
50:12
of thinking about the world where
50:15
one can sort of build whole
50:17
towers of reasoning and so on.
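The retrieval-augmented-generation idea described above (search a document collection for rough matches to a query, then hand the matches to the LLM as extra context) can be sketched with a toy similarity measure. This is a minimal illustration, not a real embedding or vector-database API; word overlap stands in for embedding similarity.

```python
# Toy sketch of retrieval-augmented generation: score documents against a
# query, take the best matches, and prepend them to the prompt. Real systems
# use embedding vectors and a vector store; word overlap stands in here.

def similarity(query, doc):
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=2):
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: similarity(query, d), reverse=True)[:k]

def augmented_prompt(query, docs):
    """Build the prompt the LLM would actually see."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Telomeres cap the ends of chromosomes.",
    "Cellular automata are simple computational rules.",
    "Stem cells can differentiate into many cell types.",
]
print(augmented_prompt("What are cellular automata rules?", docs))
```

The computation-augmented variant Wolfram describes would replace `retrieve` with an actual computation whose result is spliced into the prompt the same way.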
50:19
Let's see. What's the next step
50:21
for LLMs to advance? So
50:24
that's an interesting question. I mean,
50:26
I think that, well, there's several
50:28
different things. I mean, there are
50:31
different kinds of training data. There
50:33
are things about, you know, video
50:35
is starting to come online. There'll
50:37
be robotic training data, and that
50:39
will allow one to have kind
50:42
of things that... are for sort
50:44
of modeling what happens with robots
50:46
and probably humanoid robots, because those are
50:48
the ones where we're going to
50:50
have sort of much more
50:53
knowledge than anywhere else about sort
50:55
of what happens in the world
50:57
for the thing that's humanoid shaped.
50:59
I think the that's one kind
51:01
of thing. Another kind of thing
51:04
is well We were all surprised
51:06
by ChatGPT, we were all
51:08
surprised that it turned out to
51:10
be easier than we expected to
51:12
produce fluent essays that kind of
51:15
coherently made sense over quite
51:17
long stretches. And, you know,
51:19
we kind of thought that human
51:21
language was something more special than
51:23
that. We knew that human language
51:26
had certain grammatical structure. People who
51:28
have been doing computational linguistics since
51:30
the 1960s have known that, you
51:32
know, sort of thought about
51:34
language as, you know, noun, verb,
51:37
noun, etc., subject, verb,
51:39
object, in English, all those kinds
51:41
of things. What we sort of
51:43
realize now is in addition to
51:45
the rules of grammar that tell
51:48
us sort of syntactically how to
51:50
put sentences together, what part of
51:52
speech follows what part of speech.
51:54
There's also kind of... the idea
51:56
of the semantic grammar of language,
51:59
what concepts fit with what other
52:01
concepts. And I think, you know,
52:03
my sort of theory of ChatGPT,
52:05
and LLMs in general,
52:07
is that what they sort of
52:10
learnt, they inferred from kind of
52:12
large chunks, a trillion words of
52:14
language or something, they were able
52:16
to sort of statistically infer in
52:18
some sense, kind of rules of
52:21
semantic grammar that they
52:23
can then use to produce meaningful
52:25
sentences. Now an interesting question that
52:27
arises when you look at things
52:29
like DeepSeek, that seem to
52:32
be sort of emerging where there
52:34
seems to be emergent reasoning happening.
52:36
It's not quite as deep perhaps
52:38
as one might have hoped, but
52:40
there's some emergent way in which
52:43
arguments are being created and executed.
52:45
And the question is, what's the
52:47
sort of formal representation of those
52:49
plans? At some level... when it
52:51
comes to human language, we sort
52:54
of have the idea from grammar,
52:56
syntactic grammar, of what it might
52:58
look like to have a sort
53:00
of formalized structure to represent language.
53:03
There's a question, what does it
53:05
look like to have a formalized
53:07
structure for reasoning? And I'm pretty
53:09
sure there's an easy answer to
53:11
that. And it may be very
53:14
close to programs that we know
53:16
very well from even programming languages,
53:18
let alone computational language. I think
53:20
the... In a
53:22
sense, logic is a sort of
53:25
base level of kind of a
53:27
structuring for sentences that leads to
53:29
reasoning. And maybe you can
53:31
extend it a bit further than
53:33
that, but I guess the question
53:36
is when you have these much
53:38
larger-scale plans, what,
53:40
you know, what
53:42
kind of formal thing
53:44
can you use to describe that
53:47
chain of reasoning? And I think
53:49
that will help us to understand,
53:51
you know, just how significant
53:53
the fact that this emerges is. Maybe
53:55
it tells us something about science
53:58
or something about philosophy.
54:00
And maybe we can then
54:02
just take what the LLM essentially
54:04
discovered for us, and then take
54:06
it and use it in a
54:09
much more formal way, rather than
54:11
having to make the LLM rediscover
54:13
it every time, so to speak.
54:15
I noticed a question, the comment
54:17
from Taki saying, I should do
54:20
a conversation with Joscha Bach. I know
54:22
Joscha quite well. I think we've
54:24
done, I think we did a
54:26
conversation on his podcast once, but
54:28
I know him quite well. He's
54:31
been trying to get launched with
54:33
this California Institute for Machine Consciousness,
54:35
which I've been helping a bit
54:37
with. If nothing else, it's a
54:39
good science fiction name. And one
54:42
sort of imagines... well,
54:46
I kind of imagine it at
54:48
some level as a place of
54:50
experimental philosophy so to speak where
54:53
it's kind of like you've got
54:55
this you know you potentially have
54:57
sort of the artificial brain on
54:59
which you can do philosophical experiments
55:01
philosophy has generally not been an
55:04
experimental science it's been a theoretical
55:06
science where you just sort of
55:08
have to think about things but
55:10
it's something which in modern times
55:12
you know kind of the philosophy
55:15
is something you can imagine kind
55:17
of experimentally exploring with things like
55:19
LLMs. You know, I had fun
55:21
a few months ago with a
55:23
humanoid robot that had an LLM
55:26
inside, so to speak. I did
55:28
a live stream, which I'm sure
55:30
you can find on the web
55:32
of a surprisingly long conversation with
55:34
a humanoid robot sort of powered
55:37
by an LLM, which was kind
55:39
of... to me sort of viscerally
55:41
philosophically interesting in the sense that
55:43
it really was a strange feeling
55:45
to talk to this humanoid thing
55:48
with eye motions that were fairly
55:50
realistic, although I think it stared
55:52
a lot more than humans stare,
55:54
which, kind of,
55:56
was very disquieting after a while.
55:59
But it was sort of an
56:01
interesting experience in kind of experimental
56:03
philosophy, so to speak. Memes asks,
56:05
have any LLM agents been trained
56:08
on my big book, A New Kind
56:10
of Science, or maybe on the
56:12
science that's in A New Kind of
56:14
Science? We have tried to train
56:16
some LLMs on that. It's not
56:19
terribly successful so far, because... In
56:21
a sense, well, it's not been
56:23
successful in doing the thing that
56:25
is really a big reach, which
56:27
is to sort of break computational
56:30
irreducibility, to be able to say,
56:32
given just that you feed the
56:34
rules in, have the AI jump
56:36
ahead and say what will happen.
56:38
We don't think that's theoretically possible
56:41
in general. But there is a
56:43
question of
56:45
the sort of pockets of
56:47
reducibility that exist, to what extent
56:49
can those be found by an
56:52
AI? And if you feed it
56:54
enough different examples of different kinds
56:56
of things, where we humans notice
56:58
that, oh, there are some regularities,
57:00
like, for example, back 40 years
57:03
ago now, I kind of noticed
57:05
that there were four basic classes
57:07
of behavior in these things called
57:09
cellular automata, these simple rules, computational
57:11
rules that you can run, there
57:14
were four basic classes of behavior
57:16
that you could kind of visually
57:18
identify. And that's something that I
57:20
would think is perfectly accessible to
57:22
kind of neural net investigation. So
57:25
it's an interesting question, kind of
57:27
what, if you feed the LLMs
57:29
or AIs enough kind of NKS
57:31
material, do they start kind of
57:33
inventing a language about it? Do
57:36
they start having kind of effectively
57:38
having discovered a description? It will
57:40
be very alien to us, perhaps
57:42
less alien to me than to
57:44
anybody else, but still very alien
57:47
even to me,
57:49
if one starts
57:51
sort of being able to reason
57:53
in terms of these structures that
57:55
exist in this very abstract space.
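The cellular automata mentioned above (Wolfram's simple computational rules, whose behavior falls into four visually identifiable classes) are easy to generate. Here is a minimal sketch in Python; rule 30 is one of the standard examples, and the four-class classification is Wolfram's own, not something the code computes.

```python
# Minimal elementary cellular automaton: each cell's next state depends on
# itself and its two neighbors; the 8 possible neighborhoods map to bits of
# the rule number (here rule 30, a classic "random-looking" class-3 rule).

def step(cells, rule):
    """Apply one update of an elementary CA with wraparound boundaries."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    """Run from a single black cell and return all rows."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = step(row, rule)
        rows.append(row)
    return rows

# Print the evolution as a little triangle of # and . characters
for r in run(30):
    print("".join("#" if c else "." for c in r))
```

Feeding many such runs, for many rule numbers, to a neural net is the kind of experiment being described: can the net learn to assign rules to the four behavior classes?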
58:00
Let's see. Well, there's questions here about
58:02
reinforcement learning. Justin is asking about external
58:04
reasoning paradigm, more generally reinforcement learning in
58:06
LLMs. Yeah, I mean, just to explain
58:09
a little bit about that: in an LLM,
58:11
kind of what you're trying to do
58:13
is to make this whole neural net
58:15
that's going to take
58:18
kind of a representation of a piece
58:20
of text up to some point. And
58:22
it's then going to predict, how does
58:24
that piece of text go on? What's
58:26
the probability that the next word is
58:29
the versus a versus cat versus dog?
58:31
And the way it's trained to do
58:33
that is you've got a whole piece
58:35
of text, and you basically cover up
58:38
the end of the text, and you
58:40
say, OK, I want to tweak this
58:42
neural net, so that it will
58:44
correctly reproduce what was there if I
58:46
take off the cover. And that's kind
58:49
of the incremental approach to learning:
58:51
it's both incremental in the way that
58:53
you tweak the neural net, and it's
58:55
also learning
58:58
to produce a token at a time
59:00
as it produces that stream of
59:02
text. In typical LLMs you just explicitly
59:04
see them writing out one word at
59:06
a time. That's not just done for
59:09
effect. Sometimes it is a
59:11
little bit done for effect, but a
59:13
large part of that is just, it
59:15
is generating those words sequentially. And just
59:18
like we do probably, you know, I'm
59:20
not sure that I have a plan
59:22
for what my next word is going to
59:24
be; it's just that the mechanics of
59:26
my neural net successfully produces the next
59:29
word. So it is with an LLM.
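The training setup just described (cover up the end of a text, then nudge the model so it predicts the hidden continuation) can be illustrated with the simplest possible "model", a bigram frequency table. This is a toy stand-in for a neural net and assumes nothing about any real LLM implementation.

```python
# Toy illustration of next-token training: walk a corpus, and for each word
# "cover up" the next one, then update the model so it predicts it better.
# A real LLM adjusts billions of weights; here the "model" is bigram counts.
from collections import Counter, defaultdict

def train(corpus_words):
    """For each position, learn from the covered-up next word."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        model[prev][nxt] += 1          # the "tweak": count this continuation
    return model

def predict_next(model, word):
    """Most probable next word under the trained counts."""
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train(corpus)
print(predict_next(model, "the"))      # "cat" follows "the" twice, "mat" once
```

Generating token by token is then just repeated calls to `predict_next`, which is the sequential, word-at-a-time behavior described above.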
59:31
Okay, so the idea of reinforcement
59:33
learning is to do something
59:35
a bit more global, to say, okay,
59:38
let's look at the whole thing that
59:40
came out, the whole essay that came
59:42
out, the whole answer to that question,
59:44
the whole math problem it did, whatever
59:46
else, and say, is that right or
59:49
wrong? And then say, okay, given that
59:51
that was right or wrong, how does
59:53
that feed back to tweaking this neural net?
59:55
It's much easier to tweak a neural
59:58
net if you say, well, it just made
1:00:00
a small mistake here. I can sort
1:00:02
of propagate back that error and tweak
1:00:04
the weights in the neural net, the
1:00:06
numbers in the neural net, to make
1:00:09
it a bit closer to having got
1:00:11
the right answer instead of the wrong
1:00:13
answer there. When you're looking at the
1:00:15
sort of the whole thing, it's more
1:00:17
difficult, it's more complicated, to decide how
1:00:20
you make changes to the neural net to
1:00:22
make it get closer to the thing
1:00:24
you want to be the answer, so
1:00:26
to speak. And it's also the case
1:00:29
that you can expect the neural net
1:00:31
to kind of surface its own questions
1:00:33
to be asking, and then if you
1:00:35
have an outside arbiter, an outside critic
1:00:37
of what's happening, and, you know,
1:00:40
you're running it against Wolfram Language or
1:00:42
something, and Wolfram Language is telling
1:00:44
you, that answer is wrong, and then
1:00:46
you can go and sort of feed
1:00:49
that back and make changes so you
1:00:51
won't get that same wrong answer the
1:00:53
next time. The idea
1:00:55
of reinforcement learning as opposed to traditional
1:00:57
LLM training is that in traditional LLM
1:01:00
training it's kind of like you're trying
1:01:02
to get a token at a time
1:01:04
and you're sort of training for that,
1:01:06
whereas reinforcement learning you're really training for
1:01:09
whole answers so to speak. And there's
1:01:11
a bit more arbitrariness in how that's
1:01:13
done and there's sort of gradual engineering
1:01:15
advances in how to make that work.
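The contrast can be caricatured in a few lines of Python. This is purely a hypothetical sketch of whole-answer reinforcement, with a trivial two-answer "model" and an external checker standing in for a tool like Wolfram Language; real reinforcement learning on LLMs involves far more machinery (policy gradients, reward models, and so on):

```python
# Toy sketch of whole-answer reinforcement: sample a complete answer,
# have an outside arbiter grade it, and nudge the generator's weights.
import random

random.seed(0)

# Hypothetical "model": unnormalized preference weights over whole answers.
weights = {"2+2=4": 1.0, "2+2=5": 1.0}

def sample_answer():
    # Sample an answer in proportion to its current weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for answer, w in weights.items():
        r -= w
        if r <= 0:
            return answer
    return answer

def checker(answer):
    # Outside critic grading the whole answer right or wrong.
    return answer == "2+2=4"

LEARNING_RATE = 0.5
for _ in range(200):
    answer = sample_answer()
    reward = 1.0 if checker(answer) else -1.0
    # Reinforce answers graded right; suppress answers graded wrong.
    weights[answer] = max(0.01, weights[answer] * (1 + LEARNING_RATE * reward))

# After many rounds the correct whole answer dominates the sampling.
```

The key difference from the next-token setup is that the feedback signal here applies to the whole generated answer at once, not to each token individually.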
1:01:17
I mean, I think the thing to
1:01:20
realize is that there's another level which
1:01:22
is kind of the harness inside of
1:01:24
which you're operating the AI and doing
1:01:26
things like calling out to computational tools
1:01:29
or whatever else. That's something that happens
1:01:31
kind of at a higher level of
1:01:33
the thing that is sort of watching
1:01:35
what the AI is doing and is
1:01:37
noticing that the AI is saying, hey,
1:01:40
I want to ask, you know, Wolfram
1:01:42
Alpha some question or something like
1:01:44
that, and then it's getting that question
1:01:46
picked up, answered, and fed back to
1:01:49
the LLM. As I said, I think
1:01:51
the ultimate future is a much more
1:01:53
fine-grained sort of arrangement of LLM-type operations
1:01:55
with computational operations, but we don't yet
1:01:57
really know how to do that. There's
1:02:02
a question here from butchering asking
1:02:04
how many years away do you
1:02:06
think we are from gray goo?
1:02:08
Self-replicating nanomachines. So that was a
1:02:11
scenario from what must have been
1:02:13
the 1990s. People talked about that
1:02:15
before anyone worried about the
1:02:18
AIs... actually, they were already worrying
1:02:20
about the AIs taking over. Now
1:02:22
that I think about it, people
1:02:24
have been worrying about the AIs
1:02:27
taking over since long before,
1:02:29
actually from pretty much about the
1:02:31
time electronic computers started
1:02:34
to be deployed. But another kind
1:02:36
of "evolution goes past us, the
1:02:38
world passes us by" scenario, beyond the
1:02:40
AIs taking over, is a much more
1:02:43
extreme version of that, which is
1:02:45
a self-replicating nanomachine takeover. Right now,
1:02:47
if you say, sort of how
1:02:49
do you make a machine that
1:02:52
has components at a molecular scale?
1:02:54
The answer is: we, biology, are
1:02:56
sort of the best example we
1:02:59
know of a molecular scale orchestrated
1:03:01
machine where we've got all these
1:03:03
little pieces that fit together in
1:03:05
this very precise molecular way and
1:03:08
one molecule does this and interacts
1:03:10
with another molecule and so on.
1:03:12
A bunch of our
1:03:14
operation is at a molecular scale.
1:03:17
There are much simpler
1:03:19
molecular-scale things that
1:03:21
can happen, like when a crystal
1:03:24
grows: it's arranging molecules in this
1:03:26
very precise array, and that's
1:03:28
sort of molecular-scale
1:03:30
precision, but it only produces
1:03:33
this repeating crystal, let's say. I
1:03:35
kind of am thinking about crystals
1:03:37
that have more computation even
1:03:39
in the way that they're formed,
1:03:42
but that's a quite
1:03:44
different footnote to the story. But
1:03:46
the main thing is that right
1:03:49
now, we are the only self-replicating
1:03:51
molecular scale things that exist. But
1:03:53
you could imagine that we could
1:03:55
simplify the whole process of self-replication,
1:03:58
and we could end up with
1:04:00
just a molecule, you
1:04:02
know, or, you know, two or
1:04:04
three molecules that forget all this
1:04:07
stuff with, you know, metabolism and
1:04:09
RNA and cell membranes, all these
1:04:11
kinds of things, and water and
1:04:14
everything, and it could just be
1:04:16
this arrangement of two or three
1:04:18
molecules that have the feature that
1:04:20
those molecules will just pick up
1:04:23
atoms from everything else and just
1:04:25
keep... you know, making more and more
1:04:27
of those molecules. It's kind of like
1:04:29
a polymer, like, you know, plastics or
1:04:32
polymers, where they have, you know, hydrocarbon
1:04:34
pieces where you're just adding on
1:04:36
a longer and longer chain to
1:04:38
make that polymer molecule. And the question
1:04:40
would be, could you make kind of
1:04:43
a thing that is a more complicated
1:04:45
thing that would sort of polymerize the
1:04:47
whole world, where you would turn
1:04:49
everything in the world, all the
1:04:51
atoms in the world, all the
1:04:53
carbon and silicon and oxygen and
1:04:56
so on that exists in the
1:04:58
world, could you have this little
1:05:00
nano machine that would just, it
1:05:02
could be just a few molecules
1:05:05
that would somehow just sort of
1:05:07
eat up the world and turn
1:05:09
it into itself? Well, we don't know
1:05:11
even close to how to do that.
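A quick back-of-envelope calculation shows why this scenario, however remote, gets taken seriously: self-replication is exponential. The numbers below are my own rough, illustrative orders of magnitude, not figures from the discussion:

```python
# Illustrative doubling arithmetic for a self-replicating machine.
import math

ATOMS_ON_EARTH = 1e50        # rough order-of-magnitude estimate
ATOMS_PER_REPLICATOR = 1e3   # hypothetical few-molecule machine

# Starting from one replicator that copies itself each generation,
# how many doublings until every atom is part of a replicator?
replicators_needed = ATOMS_ON_EARTH / ATOMS_PER_REPLICATOR
doublings = math.log2(replicators_needed)
print(round(doublings))  # about 156 generations
```

That is, even with astronomically large atom counts, only a modest number of replication generations would be needed, which is what makes the scenario alarming in principle.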
1:05:13
I mean, when you see, for
1:05:15
example, you know, some crystals:
1:05:17
if you have a supersaturated
1:05:19
solution of some particular
1:05:22
material and you suddenly cool
1:05:24
it down, a crystal will
1:05:26
very rapidly form. You know,
1:05:28
you can have situations where it's
1:05:30
just immediately, I don't know, a
1:05:32
nice one that does this is
1:05:34
bismuth, which melts at some very
1:05:37
modest temperature. You can easily melt
1:05:39
it on a regular stove. And
1:05:41
when you let it cool down
1:05:43
quickly, it immediately forms into these
1:05:45
very lovely, kind of square... it's
1:05:47
very nice-looking stuff
1:05:50
that forms into these little square
1:05:52
pieces and so on, and that
1:05:54
happens very quickly. It kind of
1:05:56
forms that spontaneously just by the way
1:05:58
the atoms of bismuth fit together. And
1:06:01
so the question would be, could
1:06:03
something like that happen? Could you
1:06:05
have sort of a gray goo?
1:06:07
Its very name, gray goo,
1:06:09
kind of says, well, it's all
1:06:11
over if the world starts getting
1:06:13
eaten by the sort of polymerizing
1:06:15
kind of critter that's just replicating
1:06:17
copies of itself. You know, we're
1:06:19
not close to that. In fact,
1:06:21
the whole enterprise of nanotechnology that
1:06:23
was pretty popular in the late
1:06:25
80s, early 1990s, unfortunately, came upon
1:06:27
hard times. I mean, the idea
1:06:29
was, can one take... sort of
1:06:31
machinery that we know exists at
1:06:34
a large scale with clockwork and
1:06:36
levers and all these kinds of
1:06:38
things and can one shrink it
1:06:40
down to a molecular scale and
1:06:42
quite a lot was figured out
1:06:44
about how to do that and
1:06:46
a lot of different sort of
1:06:48
mechanical engineering issues about you know
1:06:50
how do you lubricate something that's
1:06:52
the size of a few molecules
1:06:54
those kinds of questions, and
1:06:56
how much do things just sort
1:06:58
of stick together, and how do
1:07:00
you deal with
1:07:02
that kind of thing. Well, a
1:07:04
fair amount was figured out about
1:07:07
that, and actually
1:07:09
in the US
1:07:11
there were some government initiatives of,
1:07:13
you know let's fund nanotechnology and
1:07:15
really make that a big thing.
1:07:17
Unfortunately I mean what really happened
1:07:19
maybe I'm being a bit too
1:07:21
cynical here is that there was
1:07:23
sort of an interscientist kind of
1:07:25
rivalry between folks like chemists and
1:07:27
material scientists who after all have
1:07:29
been dealing with molecules in their
1:07:31
ways for a long time and
1:07:33
the nanotechnology crowd that was dealing
1:07:35
with molecules their way, and
1:07:38
in the
1:07:40
end it seems like what happened
1:07:42
was the nanotechnology direction just sort
1:07:44
of got stomped on by these
1:07:46
existing fields and it really hasn't
1:07:48
been pursued. My own feeling is
1:07:50
that there's a lot of promise
1:07:52
there. I think one thing that's
1:07:54
probably not correct is the idea
1:07:56
of let's take machinery that we
1:07:58
know how it operates on the
1:08:00
scale of centimetres and let's shrink
1:08:02
it down to the scale of
1:08:04
nanometers and have it be the
1:08:06
same kind of machinery, but just
1:08:08
on a much smaller scale, my
1:08:11
guess is that's not really the
1:08:13
right way to do it. A
1:08:15
better way to do it is
1:08:17
to think about sort of how
1:08:19
do you take the components that
1:08:21
exist of molecules we have and
1:08:23
how do you assemble them to
1:08:25
sort of compile up to a
1:08:27
thing that's useful to us. It's
1:08:29
kind of like saying, well, you
1:08:31
know, we could, if we wanted
1:08:33
to do arithmetic, we could do
1:08:35
what people like Charles Babbage did
1:08:37
when they made mechanical computers back
1:08:39
in the early 1800s and they
1:08:41
had, you know, a wheel that
1:08:44
had, you know, digits zero through
1:08:46
nine and they had cogs that
1:08:48
connected that to carry bits and
1:08:50
so on. Well, it turns out that
1:08:52
just having a bunch of NAND
1:08:54
gates in a microprocessor and arranging
1:08:56
them in the right way, you
1:08:58
can achieve the same thing. You
1:09:00
don't need to build in the
1:09:02
structure of decimal arithmetic with carry
1:09:04
bits and so on to the
1:09:06
generic computer. And my guess is
1:09:08
the same is true with molecular
1:09:10
computation that one can start with
1:09:12
very much more mundane components and
1:09:15
essentially purely in software in effect.
1:09:17
build up to something that is
1:09:19
practically useful. And I think in
1:09:21
a sense that's what chemistry has
1:09:23
been doing forever in chemical synthesis,
1:09:25
synthetic chemistry, and so on, but
1:09:27
chemistry is much less orchestrated than
1:09:29
one imagines molecular nanotechnology to be.
1:09:33
In chemistry, sort of the
1:09:35
main thing is, well, how are
1:09:37
you going to get molecules to
1:09:39
interact? Well, liquids are a
1:09:39
really good case because there are
1:09:41
lots of molecules bouncing around, and
1:09:43
because they bounce around, if two
1:09:45
molecules are going to sort of
1:09:48
fit into each other, they'll probably
1:09:50
find each other in the
1:09:52
liquid and they'll stick; and
1:09:54
they'll go on and find other
1:09:56
molecules if they don't stick. Whereas
1:09:58
in a solid the molecules are
1:10:00
nice and close together but they
1:10:02
don't move around, and in a gas
1:10:04
there just aren't enough molecules. But
1:10:06
in a liquid you have all this
1:10:08
sort of randomness of motion of
1:10:10
things jiggling around, it's not like
1:10:12
what seems to happen in biology,
1:10:14
where it seems like there's much
1:10:16
more orchestration: this molecule is
1:10:18
actively moved by this chain of
1:10:21
other molecules to go to this
1:10:23
place and so on. And I
1:10:25
think that's the thing that one
1:10:27
sort of imagines could be engineered
1:10:29
in nanotechnology is this kind of
1:10:31
orchestrated arrangement of molecules. And I
1:10:33
think there's a lot of wonderful
1:10:35
things that can be done with
1:10:37
that. I think it hasn't been
1:10:39
investigated as much as it could
1:10:41
be. In fact, it makes me
1:10:43
wonder whether there are sort of
1:10:45
machine learning type plays that could
1:10:47
be made, sort of both mining
1:10:49
the existing literature and knowing
1:10:52
certain amounts about chemistry, so to
1:10:54
speak, that would allow you to
1:10:56
make progress there. I mean, in
1:10:58
a sense, people have tried to
1:11:00
do this with protein engineering and
1:11:02
sort of the hope of text
1:11:04
to protein, so to speak. You
1:11:06
just say what you want the
1:11:08
protein to do, you know, you
1:11:10
say, I want to make a
1:11:12
cage for a molybdenum atom or
1:11:14
something, and then a protein will
1:11:16
get specified that will curl itself
1:11:18
up and make a cage just
1:11:20
the right size for a molybdenum
1:11:22
ion or something. But, you know,
1:11:25
I think
1:11:27
there's sort of great promise in
1:11:29
this sort of molecular scale computation,
1:11:31
molecular scale activity, where the molecules
1:11:33
are carefully arranged in what they
1:11:35
do, much like in life, but
1:11:37
without the immense baggage of life,
1:11:39
that there are probably much simpler
1:11:41
mechanisms that can be set up
1:11:43
for just doing the nanotechnology part
1:11:45
without having the whole organism that's
1:11:47
eating things and so on. It's
1:11:49
just not something that's been explored
1:11:51
very much. It was explored at
1:11:53
the end of the 80s, beginning
1:11:55
of the 90s, and it really
1:11:58
really stopped being explored. And I
1:12:00
suppose protein engineering, which
1:12:02
is a recent thing that's
1:12:04
been discussed in the context of machine
1:12:06
learning, is a little bit an
1:12:08
outgrowth of that, although I think
1:12:10
that proteins are quite floppy,
1:12:12
and it's not obvious that proteins
1:12:14
are the right way to do
1:12:16
it. I mean, it's like saying,
1:12:18
let's build a giant sort of
1:12:20
transportation ecosystem. But it's all going
1:12:22
to be based on horses, because
1:12:24
horses succeed in moving from here
1:12:26
to there. And how could you
1:12:28
imagine doing something different? Well, it
1:12:31
turns out we did in our
1:12:33
civilization have a transportation ecosystem based
1:12:35
on horses, but it turns out cars
1:12:37
were a better idea in pretty
1:12:39
much every way I can think
1:12:41
of. So I
1:12:43
think it's the same thing with
1:12:45
nanotechnology. We can kind of try
1:12:47
and piggyback on biology. There's a question here about training LLMs on scientific papers.
1:12:49
And the answer is yes, I
1:12:51
mean there have been a bunch
1:12:53
of efforts to make models that,
1:12:55
well, okay, one of the questions
1:12:57
is when you make a sort
1:12:59
of generic LLM, one of the
1:13:02
discoveries is that knowing about lots
1:13:04
of stuff is useful even when
1:13:06
you're asking it about specific stuff.
1:13:08
You might have thought that knowing
1:13:10
a little bit about astronomy was
1:13:12
irrelevant if all you wanted to
1:13:14
talk about was ocean life. But
1:13:16
it turns out, and it's true
1:13:18
of us humans, that somehow having
1:13:20
that little bit of common sense
1:13:22
that comes from knowing something about
1:13:24
astronomy, is important everywhere somehow. We
1:13:26
don't completely know how, but the
1:13:28
idea that the LLM can be,
1:13:28
oh, I'm a specialized LLM, I
1:13:32
just do this, that tends to
1:13:35
be very narrow; and it tends to
1:13:37
be important to have this sort
1:13:39
of breadth of knowledge. Now, when
1:13:41
it comes to, did you ever
1:13:43
feed it sort of the fanciest,
1:13:45
the best scientific papers, whatever that
1:13:47
means, there have been efforts to
1:13:49
do that. It's all tied up
1:13:51
with, you know, well, who really
1:13:53
owns the rights to these things?
1:13:55
Can the LLM ingest it? You
1:13:57
know, what happens if you get
1:13:59
a result from the LLM and
1:14:01
how much of the undigested original
1:14:03
shows through? And there's a lot
1:14:05
of just practical ecosystem of the
1:14:08
world type questions there. But yes,
1:14:10
there's been a fair amount done
1:14:12
on this. So that was a
1:14:14
period of time maybe a year
1:14:16
ago when everybody seemed to be
1:14:18
talking about we're going to make
1:14:20
a special LLM that's going to
1:14:22
be trained just on science. And
1:14:24
it didn't seem like that was
1:14:26
very promising, because as I say,
1:14:28
it seems like these shards of
1:14:30
common sense that come from different
1:14:32
areas are important. But just having
1:14:34
it know about lots of kinds
1:14:36
of scientific papers, a lot of
1:14:39
the high-end LLMs already know that
1:14:41
and have been trained on large
1:14:43
corpuses of those things. And that's,
1:14:45
you know, that is an interesting
1:14:47
question. I mean, the thing that
1:14:49
I'm about to really try seriously
1:14:51
with the latest models... is this
1:14:53
question of, okay, I'm looking for
1:14:55
experimental implications of our Physics Project, and
1:14:57
that requires a certain amount of
1:14:59
putting together of different things,
1:15:01
sort of trawling
1:15:03
out of the literature certain things,
1:15:05
where I can't just search for
1:15:07
it. It's a much vaguer question
1:15:09
that I have, you know, what
1:15:12
effect does it have on a
1:15:14
quasar if the dimensionality
1:15:16
of space changes?
1:15:18
There are little shards
1:15:20
of information about that from different
1:15:22
places. Can you kind of aggregate
1:15:24
those together? And can you use
1:15:26
that as sort of a tool
1:15:28
for figuring out research? I do
1:15:30
think that... my experience so far involves
1:15:32
not much effort at doing
1:15:34
this, and it's become progressively more
1:15:36
plausible to do this as the
1:15:38
models have become more sophisticated and
1:15:40
trained on better material and so
1:15:42
on. But my initial take is
1:15:45
that if you don't have any
1:15:47
idea where you're going, you won't
1:15:49
be led anywhere useful. In other
1:15:51
words, if you try and pull
1:15:53
the thing by the nose in
1:15:55
some direction, it will walk in
1:15:57
an interesting way, so to speak.
1:15:59
If you just say... hey, go
1:16:01
find me an interesting direction. It
1:16:03
will only tell you things that
1:16:05
you kind of already knew. It will
1:16:07
just sort of, you know,
1:16:10
revert to kind
1:16:12
of the mean of what people have
1:16:14
said before, so to speak.
1:16:16
But we'll see. I'll probably know
1:16:19
more about this in, well, fairly
1:16:21
short amount of time. All right, I
1:16:23
think it's time for me to go to
1:16:25
my day job here. But thank you
1:16:28
for a lot of interesting
1:16:30
questions and I'm sorry I
1:16:33
didn't get to lots of
1:16:35
other interesting questions that
1:16:37
were here. And I
1:16:39
clearly need to have a
1:16:41
wardrobe update and I will
1:16:43
perhaps do that by the time
1:16:45
of my next live stream. I
1:16:47
think I get to do another
1:16:50
one next Wednesday. So
1:16:52
we'll see. Check out the shirt
1:16:54
for next Wednesday. All
1:16:56
right, well, thanks for joining me and bye
1:16:58
for now. You've been listening to the
1:17:01
Stephen Wolfram podcast. You can
1:17:03
view the full Q&A series
1:17:05
on the Wolfram Research YouTube
1:17:07
channel. For more information on
1:17:10
Stephen's publications, live coding streams,
1:17:12
and this podcast, visit Stephen
1:17:14
Wolfram.com.