The Quest to ‘Solve All Diseases’ with AI: Isomorphic Labs’ Max Jaderberg

Released Tuesday, 29th April 2025

Episode Transcript


0:00

have set up the company from day one

0:02

to really go after this big ambition. This

0:05

isn't about

0:07

developing therapeutics for a particular

0:09

indication or a particular

0:11

target. It's really thinking

0:13

about how do we create a

0:16

very general drug design engine with

0:18

AI, something that we can apply

0:20

to not just a single target

0:22

or even a single modality, but

0:24

we can apply this again and

0:26

again across any different disease area. And

0:29

that's what we're stepping towards at the moment. Today

0:47

we're excited to welcome

0:49

Max Jaderberg to the show, Chief AI

0:51

Officer of Isomorphic Labs, which launched

0:53

out of DeepMind with the

0:55

goal of revolutionizing drug discovery using

0:58

AI. Last summer,

1:00

they released AlphaFold 3, a

1:02

stunning breakthrough that allows us to

1:04

model not just the structure

1:06

of proteins, but of all molecules

1:08

and their interactions with each other. That

1:11

led to Demis Hassabis winning the Nobel Prize

1:13

in Chemistry last year. Max

1:16

describes their vision for what a holy grail

1:18

model for drug design and what agents for

1:20

science look like. He draws parallels

1:22

to his experiences building AlphaStar and

1:24

Capture the Flag, and the

1:26

research directions of building agents and games

1:28

more broadly. Specifically,

1:31

with 10 to the power of 60

1:33

possible drug molecule structures, we need to

1:35

build both generative models and agents that

1:37

can learn how to explore it and

1:39

search through the whole potential design space. Max

1:43

also describes his vision for what a GPT-3

1:45

moment for the field might look like. Describing

1:48

it more akin to AlphaGo's famous

1:50

Move 37 when we start to

1:52

see things that exhibit superhuman levels

1:54

of creativity in AI drug design, and

1:57

that stun even us humans. This

2:00

is one of my favorite episodes yet.

2:02

Enjoy the show. Max,

2:05

thank you so much for joining us today

2:07

here in London. It's a pleasure to be with

2:09

you here. Yeah, it's fantastic. Awesome

2:12

timing, too, with the launch of AlphaFold 3

2:14

and with Demis winning the Nobel Prize in Chemistry,

2:16

which is a true testament to everything that you

2:18

and your team have done over the last couple

2:20

of years. Yeah, 2024 was

2:22

definitely a busy... for

2:24

us. Lots of big breakthroughs. The

2:26

Nobel Prize was just incredible to see. You

2:29

know, I think amazing recognition for this for

2:31

this seminal piece of work. Yes. Well,

2:33

I'd love to start with talking

2:35

a little bit about your own personal

2:37

story. You've had an incredible career

2:40

in the world of deep learning from

2:42

the very start, authoring many seminal

2:44

papers while at DeepMind, including for Capture

2:46

the Flag and AlphaStar, breakthroughs

2:49

in the world of deep learning. Can

2:51

you walk us through some of the

2:53

key questions that you had in your

2:55

field of research around deep reinforcement learning

2:57

at the time? Yes,

3:00

so at DeepMind, I

3:02

worked on a whole host

3:04

of stuff, early days of computer

3:06

vision and deep generative models,

3:08

but it was really reinforcement learning

3:10

that ended up hooking me

3:13

there. DeepMind was the place in

3:15

the world to be working on reinforcement learning at

3:17

that time. You

3:19

know, really the question in our minds was,

3:22

how can we actually get to a

3:24

point where we could get an

3:26

AI that could go off and do

3:28

any task you wanted it to

3:30

do? And, you know,

3:33

the dominant paradigm at that

3:35

point in time was supervised

3:37

learning. And

3:39

supervised learning is very different from reinforcement

3:41

learning. They're both learning techniques, but

3:43

supervised learning, you need to know what

3:45

the answer to your question is.

3:47

And that's how you train the model.

3:49

So in supervised learning, you give

3:51

an example and then you supply the

3:53

model with the answer to that

3:55

question. Now,

3:57

that can be great if you already

3:59

know everything about the problem that you're

4:01

training this AI to do, this

4:03

neural network to do. But most times

4:05

you don't. Yeah. I mean, there's

4:08

just so many problems in the world where

4:10

we don't know what the answer is. We

4:12

don't know what the solution is. And if you

4:14

think about, you know, I think about how I want

4:17

AI to be applied to the world. Yes,

4:19

it's going to be great to be

4:21

able to apply things where we're already

4:23

good as humans here. But really, you

4:25

know, the big frontier is can we

4:27

start applying AI to places where, you

4:29

know, humans don't know how to do

4:32

this stuff or, you know, there's a

4:34

limit to human performance there. And,

4:36

you know, that's where reinforcement

4:38

learning is. one of the key

4:40

tools and has real promise

4:42

here because in reinforcement learning, you

4:44

don't need to know what

4:46

the answer to the question is.

4:48

You just need to be

4:50

able to say whether the answer

4:52

that the model gave you

4:54

was good or not good. Maybe

4:56

even how good or not

4:58

good. So this opens up a

5:01

completely new field of problems

5:03

to train these models against. And
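[Editor's note: the distinction the speaker draws — supervised learning needs the answer for every example, reinforcement learning only needs a score for whatever the model proposed — can be sketched in a few lines of Python. This is a toy illustration, not any DeepMind algorithm; the `reward` function and the target value 42 are invented for the example.]

```python
# Supervised learning: each training example comes with its known answer.
labeled_data = [(2, 4), (3, 6)]  # (input, answer) pairs for y = 2x

def supervised_loss(predict):
    # Computable only because the answers are supplied up front.
    return sum((predict(x) - y) ** 2 for x, y in labeled_data)

print(supervised_loss(lambda x: 2 * x))  # 0: the labels define success

# Reinforcement learning: no answers, just a reward saying how good
# (or not good) an attempted action was.
def reward(action):
    return -abs(action - 42)  # the environment scores the attempt

# A crude agent: propose actions, keep the one the reward rates best.
best_action = max(range(101), key=reward)
print(best_action)  # 42, discovered without ever being told the answer
```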

5:06

so reinforcement learning and

5:08

really starting from what

5:10

was one of the big breakthroughs

5:12

of DeepMind in the early days was

5:14

working on games like Atari. Yes.

5:16

The question was, okay, so how can

5:18

we scale this up from the

5:21

world of Pong and space invaders to

5:23

things that really start to look

5:25

like real problems in the world? And

5:28

so there was an amazing track

5:30

of research as we scaled up these

5:32

methods. Yeah. Did you know that

5:34

Sequoia was the first investor in Atari

5:36

back in the day? Oh, really?

5:39

I didn't know that. That's incredible. Yeah,

5:41

no, those Atari games were great fun

5:43

actually to sort of go back and play

5:45

in the context of, hey, we've got

5:47

an agent and I'm just going to have

5:49

a game of pong on the sides

5:52

as well. There's a

5:54

wonderful wall at Sequoia in our

5:56

office where we have all these

5:58

names of legendary IPOs and M&As

6:00

that have happened. And there's one.

6:03

I think it's called the Pizza

6:05

Company. And I love asking

6:07

folks if they know what that is.

6:09

And it's actually from Chuck E. Cheese's, which

6:11

was an original Sequoia investment at the

6:13

time. Amazing. Amazing. So

6:16

Capture the Flag and Alpha Star

6:18

were incredible breakthroughs at the time.

6:20

Can you share a little bit

6:22

about what exactly those breakthroughs were

6:24

and maybe why you chose those

6:26

specific games? Yeah.

6:28

So, you know, if you think about

6:30

the history of AI, using

6:33

video games. Why do we use

6:35

video games at all? Video

6:37

games are these sort of

6:39

malleable, perfectly encapsulated worlds

6:41

that as researchers and scientists,

6:44

we can manipulate them,

6:47

we can test out different algorithms in them,

6:49

we can set up different situations. So

6:51

the perfect test ground for us to develop

6:53

new algorithms. And

6:56

then you can imagine, as an

6:58

RL researcher, someone who's thinking about

7:00

how can we get AI to

7:02

be as general as possible. You're always

7:04

thinking, okay, we've cracked Atari, how

7:06

do we get a more complex game? And

7:10

the thing that I was

7:12

personally obsessed with is I

7:14

want these agents to be

7:16

able to zero shot, be

7:18

able to do any task.

7:21

And this is a slightly different paradigm from

7:23

what people were doing at

7:25

the time with training on Atari, where normally

7:28

in reinforcement learning you think about, here's a

7:31

game, now you get to train on it

7:33

and get good at it. And then you

7:35

apply that same algorithm from scratch training on

7:37

different games. I'd

7:39

love a different scenario where instead we

7:41

train an agent and then we can lift

7:43

it and put it on any new

7:45

task and that agent will be able to

7:48

perform well in that task without any

7:50

more training. And

7:52

so to do that, what you're

7:54

really asking for is generalization over task

7:56

space. And

7:58

that means you need lots and

8:00

lots of training tasks. So

8:03

the training data in this

8:05

RL for agents becomes tasks,

8:07

not images, not pieces of

8:09

text, but tasks. And so

8:11

you can imagine you could go

8:13

and sit and take a whole

8:15

game studio and try and hand

8:17

author hundreds of different tasks, lots

8:19

of little mini games in these

8:21

virtual worlds. And

8:23

we did that. We were doing lots of

8:25

that. And then you can

8:27

think, yeah, we can actually go

8:30

further than hand-authoring. We can

8:32

procedurally generate these tasks and games, generating

8:35

worlds and maps

8:37

and different objectives. And

8:39

we did that. But

8:42

you keep running into this

8:44

complexity ceiling that there's only so

8:46

much complexity that you can

8:48

hand-author or you can design

8:50

humanly. But that's where

8:52

multiplayer games come in. Because

8:55

as soon as you go from single player

8:57

to multiplayer, it's not just the agent playing.

8:59

You've got another player in this game. And

9:03

that other player or other

9:05

players can take on many different

9:08

characteristics and many different behaviors. So

9:10

every different player, every different strategy that

9:12

you're up against changes fundamentally the game

9:14

and what the agent is trying to

9:17

do. I go back

9:19

and think, Why are

9:21

people still obsessed with playing chess? Why

9:23

does a professional chess player still keep

9:25

playing chess? It's the same game. But

9:28

it's actually not because you're playing completely different

9:30

opponents day after day as new people come into

9:32

the world. So

9:34

the game is continually changing. So

9:37

multiplayer games and multi-agent

9:39

games really encapsulates that huge

9:41

diversity of tasks that you

9:43

might encounter just from other

9:46

players being there. And

9:48

so capture the flag was

9:50

actually one of our first

9:52

forays into how we can use

9:54

multiplayer games to really stretch

9:56

what our reinforcement learning algorithms can

9:58

do, to really force us

10:00

to think strongly about how we

10:02

can generalize to new tasks

10:04

how we deal with these multi

10:07

-agent dynamics. So Capture the Flag

10:09

was a fantastic breakthrough. It really

10:11

showed that we could get to

10:13

human level performance for these

10:15

multiplayer first -person games. Yeah, and

10:17

of course StarCraft added on

10:19

a huge amount of complexity and was sort

10:21

of the next frontier that we had

10:23

to go after for this. You were so

10:25

early in this that so many of

10:27

these concepts are very, very relevant today in

10:29

the world of language. How does it

10:31

feel to see some of this work continue

10:33

to be played out? Yes,

10:35

brilliant. It's just

10:37

fantastic, actually. There were so many things

10:39

that we were talking about at the time.

10:41

Seven years ago, yeah. You know, 2015,

10:44

16, 17, 18. To

10:48

see all of these core fundamental

10:51

concepts be really useful and really

10:53

applicable today in the world of

10:55

large language models. And

10:57

resulting in performance that we could

10:59

only really dream about at the time.

11:01

That's incredibly satisfying. So

11:04

then in your own words,

11:06

you said that you moved from

11:08

building toys to then finding

11:10

real applications. When did you know

11:12

that you found the right recipe? I

11:17

just love deep learning. I've

11:19

been obsessed with deep learning for

11:21

10, 15 years now. And

11:24

the thing that I love

11:27

about it is that you

11:29

have these underlying core concepts,

11:32

these fundamental building blocks

11:34

that are somehow incredibly

11:36

transferable between different application

11:38

spaces. Yes. So,

11:41

you know, it's the same building

11:43

blocks that we were using in computer

11:45

vision in 2012, as we were

11:47

using in, you know, generative models in

11:49

language, you know, then reinforcement, et

11:51

cetera, et cetera. So

11:54

what I was seeing just

11:56

again and again was this

11:58

ability to take these core

12:01

concepts, these same core concepts,

12:03

take incredible people who understand

12:05

how to, you know, they're

12:07

almost like master chefs at

12:09

putting these concepts and

12:11

these different building blocks

12:14

together. Take a

12:16

team of incredible people and

12:18

go after, you know, really,

12:20

really challenging problems, you know, problems that you

12:22

go to conferences at the time and you

12:24

talk to leading researchers in the field. They

12:26

say, no, no, no, this is 10 years

12:28

away. And in the back

12:30

of your mind, you know, okay, we

12:32

actually basically cracked it. Wow. And

12:34

I saw that happen again and again

12:36

and again. You

12:39

know, you take amazing people, amazing

12:41

algorithms, amazing compute on really challenging

12:43

problems and we can find recipes

12:45

now to crack so many problems. And

12:48

so it just got to

12:50

the point where and I've always

12:52

been quite obsessed with the

12:54

application of these methods. I

12:57

want to see this technology

12:59

have, you know, real transformative positive

13:01

impacts in the world. And

13:04

so. we need to start actually going

13:06

after that and the time has been

13:08

right for I think a few years

13:10

now. Well,

13:12

so you've now had a decade

13:14

long relationship working together with one

13:17

of the greatest scientists, technologists and

13:19

founders of our lifetime, Demis.

13:22

He called you while you were still at Oxford. And

13:25

then your company, Vision Factory

13:27

and DeepMind were both acquired by

13:29

Google back in 2014, around

13:31

the same time. And that's when

13:33

the two of you started

13:35

to work together now for over

13:37

10 years. What was it like

13:39

or what has it been like to work with Demis?

13:42

Yeah, I mean, Demis is

13:44

an incredible person, you

13:47

know, a real character and

13:49

a real visionary. And,

13:52

you know, also amazingly human

13:54

and relatable. And I think

13:56

that that really inspires people.

13:58

So, you know, it only

14:00

takes a five minute conversation

14:02

to, for him to sort

14:04

of really bleed out the

14:06

depth of ambition that he

14:08

thinks about. And

14:11

just the immediacy of

14:13

the potential to step

14:16

towards these ambitions. So

14:18

I think he has

14:20

this great ability to

14:22

inject a lot of

14:24

energy into a group

14:27

of very smart people,

14:29

get people to see beyond what's

14:32

right in front of them. I

14:35

remember moments sitting or

14:37

standing in the lobby

14:39

of one of the early Deep Mind offices. I

14:41

think it was a

14:43

toast, a celebration

14:45

we were having for the first Nature paper from

14:47

Deep Mind. And

14:49

Demis was saying, you know, this

14:51

is actually just going to

14:53

be the first of dozens of

14:55

Nature papers. And at the

14:58

time this was the first, basically the

15:00

first machine learning paper in Nature. This was

15:02

the Atari DQN paper. And

15:04

the prospect of dozens of Nature

15:06

papers, you know, it seemed a bit

15:08

far-fetched. And actually he went further

15:10

and said, and we're going to be

15:12

winning Nobel Prizes as a result of

15:15

this. And that was 10 years

15:17

ago. Yeah, that's incredible. The

15:19

forethought that he has. He's got what

15:21

I call like one of these roll-out

15:23

minds. Maybe it comes from all of his experience

15:25

playing chess, but it's he's always, you know,

15:27

rolling out into the future. What are

15:29

the steps now that are going to lead, you know,

15:31

to this big ambition? So

15:34

yeah, it's been it's been fantastic. I've been

15:36

working with him for about 10 years now. We still

15:40

work really closely together on Isomorphic

15:42

Labs. And the

15:44

ambition is as big as ever. It's so

15:46

interesting to hear that you had this ambition

15:48

and that he had this ambition from

15:50

the very start. And it's

15:52

incredible that it's played out that way. Well,

15:55

I'd love to talk a little bit

15:58

about isomorphic. You're now embarking on one

16:00

of the most ambitious missions of our

16:02

generation to reimagine drug discovery and drug

16:04

development with AI. Everything

16:06

goes right and you realize your

16:08

vision for isomorphic. What does the world

16:11

look like? Yeah,

16:13

you know, we think really

16:15

big at Isomorphic. We want

16:17

to be solving all

16:19

diseases here, genuinely at that

16:21

scale. And the

16:23

point is that this technology

16:26

that we're building and AI

16:28

as a whole field is

16:30

going to be completely transformative. in

16:33

how we understand biology, in

16:36

our ability to manipulate

16:38

and craft chemistry to modulate

16:40

that biology. So

16:42

we really think about a

16:44

future where we are solving all

16:47

diseases where AI is not

16:49

just helping us discover and create

16:51

and design new therapeutics, but

16:53

also just understand so much more

16:55

about our biological world, about

16:57

how our you know cells

16:59

are working and what are

17:02

the root causes of disease and

17:04

therefore opening up new pathways

17:06

that we can think about modulating.

17:09

So we have set up

17:11

the company from day

17:13

one to really go after

17:15

this big ambition. This

17:17

isn't about developing

17:19

therapeutics for a particular

17:21

indication or a

17:23

particular target. It's really

17:25

thinking about how do we create

17:27

a very general drug design engine

17:30

with AI, something that we can

17:32

apply to not just a single

17:34

target or even a single modality,

17:37

but we can apply this again and

17:39

again across any different disease area. And

17:41

that's what we're stepping towards at the moment. How

17:44

does setting out with

17:46

this ambition of being

17:48

general change how you

17:50

built in practice from

17:52

day one? Yes, a

17:55

good question. When

17:57

I think about some of

17:59

the status quo of AI and

18:01

drug design, there's been a

18:03

lot of use of machine learning

18:05

models in chemistry and biology,

18:07

but I would call them a

18:09

lot of the first generation

18:11

of this sort of application to

18:13

be more local models where

18:15

you might have some data about

18:17

a particular target or about

18:20

how a particular class of molecules

18:22

is behaving and you'll fit

18:24

a small multi-layer

18:26

MLP against this data to

18:28

help you generate some predictions

18:30

that lead to your next

18:32

round of design. This

18:36

is the complete opposite approach

18:38

of what we were trying to

18:40

do. So from day one,

18:42

we were setting out to create

18:44

models that generalize across chemistry

18:46

and across target space. So,

18:48

you know, and a key example

18:50

of this is something like AlphaFold

18:52

and AlphaFold 3, where this

18:55

is a model that you can apply

18:57

to a whole different host of targets.

18:59

You can apply it to any protein

19:01

in the proteome, in the universe of

19:03

proteins. You can apply

19:05

it to any small molecule that you

19:07

can think of designing without needing

19:09

to fine-tune it, without needing to

19:11

fit any local data. And so you

19:13

can imagine that it completely changes

19:16

the way that chemists can use these

19:18

models if you don't need to

19:20

be adapting this model to every single

19:22

application, every

19:24

single one of our internal research projects.

19:26

And by the way, when I think

19:28

about what we're going to need to

19:30

get this breakthrough drug design engine that

19:32

we've been building, we

19:35

need like half a dozen alpha

19:37

folds. AlphaFold is just part

19:39

of the story. So

19:41

from day one, we've been

19:43

setting up these internal research programs

19:45

going after these half dozen

19:47

problems. We've had significant

19:50

breakthroughs, obviously in AlphaFold and structure

19:52

prediction, but also in other key

19:54

areas. And

19:56

in all of these,

19:58

these models are general. They

20:00

can be applied to any target. And

20:03

then what we're finding actually, they can be

20:05

applied to any modality or lots of

20:07

different modalities. Yeah. So that's the first time

20:09

I've heard you say half a dozen

20:11

alpha folds. Can you share a little bit

20:13

more about what that means? Yeah.

20:15

So, AlphaFold was obviously a

20:17

massive breakthrough in understanding biomolecular

20:19

structure. So what is the structure

20:21

of proteins? And now with

20:24

AlphaFold 3, the structure of proteins with

20:26

small molecules and things like DNA

20:28

and RNA. That's

20:30

a fundamental step change. It allows

20:32

us to get experimental level accuracy

20:35

of a really core concept of

20:37

biochemistry that unlocks a whole bunch

20:39

of thinking and design work for

20:41

chemists. My

20:44

comment here is actually we're probably

20:46

going to need something like half a

20:48

dozen more of these sort of

20:50

breakthroughs. They're sort of getting to experimental

20:52

level accuracy of different core concepts

20:54

of biology and chemistry. To

20:56

be able to put this

20:58

together into something that's really

21:00

transformative for drug design. Drug

21:03

design is really, really hard. It's not

21:05

just a single problem. It's not just

21:07

about understanding the structure of a protein.

21:10

It's not even just about designing a

21:12

molecule that will modulate that protein in

21:14

the way that you want. You want

21:16

this molecule to be able to ideally

21:18

be taken as a pill and go

21:20

through the body and be absorbed in

21:23

the right way and reach the right

21:25

cell type and actually go into the

21:27

cell and not be broken down by

21:29

the liver in a certain way. So

21:31

there's just so much complexity to hold

21:33

on to as a drug designer. And

21:36

each one of those is like

21:38

an AlphaFold-level breakthrough that

21:40

we've been creating. So interesting. Well,

21:43

I've also heard you use the words

21:45

a holy grail model for drug design

21:47

and agents for science. Can you

21:49

explain a little bit more about what you mean? Yes.

21:51

So some of these research areas

21:53

that we've been going after, predicting

21:56

structure and properties of

21:58

these molecules and how all

22:01

of these biomolecules interact

22:03

and play out over time.

22:05

These really are sort of

22:07

holy grail predictive problems for

22:09

drug design. And

22:11

we've made some incredible breakthroughs there,

22:14

which have really stunned our

22:16

chemists and step changed how we

22:18

do drug design internally at Iso. But

22:21

what I think a really interesting

22:24

thing to think about is that

22:26

you could create the best possible

22:28

predictive model of the world, like

22:30

an experimental-level, even

22:32

better-than-experimental-level model, to

22:35

predict a particular property about a molecule,

22:37

for example, to be able to

22:39

predict the outcome of a real experiment. See,

22:41

we can have a whole suite of those,

22:43

but that still wouldn't solve drug design. And

22:48

the way to think about this is, you

22:51

know, there's this number 10 to

22:53

the power of 60, which is

22:55

perhaps all of the possible drug

22:58

-like molecules that could exist. And

23:00

you know, maybe that's a

23:02

bit generous; it takes

23:04

into account a lot of

23:06

things. So

23:08

we could even reduce that by 20

23:10

orders of magnitude, get to 10 to

23:12

the 40. That's still a lot

23:14

of things. And even if you

23:16

had the best predictive models in the

23:18

world, so let's say you could screen

23:20

a billion different molecules, you could go

23:22

and test a billion different molecules, that's

23:24

10 to the nine. So, you know,

23:26

now we're still like 10 to the

23:28

31 molecules left on the table. So
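[Editor's note: the back-of-envelope numbers quoted here can be checked directly. This is a toy calculation using the speaker's rough estimates, nothing more.]

```python
# Rough scale of chemical space, using the estimates quoted above.
drug_like_space = 10 ** 60   # estimated possible drug-like molecules
restricted_space = 10 ** 40  # after shaving off ~20 orders of magnitude
screened = 10 ** 9           # an optimistic billion-molecule screen

# Fraction of even the restricted space that a billion-molecule screen covers.
fraction = screened / restricted_space
print(f"{fraction:.0e}")     # 1e-31: exhaustive search is hopeless

# Orders of magnitude left untouched, as mentioned in the conversation.
print(40 - 9)                # 31
```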

23:31

even with the best predictive models,

23:33

you're still not even scratching the surface

23:35

of molecular space that you should

23:37

be exploring. And this is

23:39

why we need to go

23:41

beyond just predictive models of experiment,

23:43

but also models like generative

23:45

models, like agents that can

23:47

actually navigate that whole 10 to

23:49

the 40, 10 to the 60

23:51

space. That's so interesting. Using our

23:53

predictive models, obviously, to understand how

23:55

to navigate that. But so we

23:57

don't have to exhaustively search because

23:59

we can never exhaustively search the

24:01

whole universe of molecules. If

24:03

that makes sense, just in the

24:05

same way that AlphaGo couldn't exhaustively search

24:07

all of the possible Go moves, unlike

24:10

chess, where you could exhaustively search all

24:13

possible chess moves. But yeah, molecule design is

24:15

much more like Go than it is

24:17

like chess. So

24:19

that's where generative models come

24:21

into play: agents that

24:23

utilize generative models, that utilize

24:25

search techniques as well as

24:28

these amazing predictive capabilities to

24:30

really open up the entirety

24:32

of molecular space. Now,

24:34

to me, it's actually still amazing

24:36

that even without AI, we managed to

24:38

find drugs in this 10 to

24:40

60 space, 10 to the 40 space.

24:44

It just says that actually there's probably a

24:46

lot of redundancy. There's a lot of

24:48

potential designs. If you think about

24:50

a particular disease indication, a particular target,

24:54

There should be many designs that exist

24:56

that would be good for that and

24:58

would be the right sort of product

25:00

profile for this therapeutic. And

25:03

I think the real potential

25:05

here is for these generative models,

25:08

these agents as well, to

25:10

be able to search through this

25:12

space and really uncover that

25:14

whole potential design space. That's so

25:16

interesting. I think in very

25:18

simplistic layman's terms, you're both

25:21

learning and modeling the game

25:23

and trying to build the best

25:25

player to solve different types

25:27

of games. Yeah. So it's, I

25:29

mean, you know, I'm incredibly

25:32

biased by games. I've been playing

25:34

video games since I was a kid, grew

25:36

up in that world. But,

25:38

you know, that's exactly how I

25:40

think about it. We've got to

25:42

be creating our world models, our

25:44

models of the biochemical world, our

25:47

biological world. And

25:49

then we don't stop there. We

25:51

actually then need to be creating

25:53

agents and generative models that can

25:55

work out how to explore, how

25:57

to traverse that, and to basically

25:59

uncover these amazing needles in the

26:01

haystack in chemical space, which could

26:03

be life -changing therapeutics for so many

26:06

millions of people. I love that.

26:08

That is our punchline today. So

26:11

AlphaFold 3 is truly groundbreaking. You've

26:14

taken us from being able to model

26:16

just the structure of a protein to now

26:18

being able to model the structure of

26:20

all molecules and their interactions with each other.

26:23

Can you share a little bit about how we

26:25

should think about that in terms of the

26:28

impact in accuracy, in speed and

26:30

efficiency, and also potentially in

26:32

being able to explore problem

26:34

spaces that we couldn't solve

26:36

before this? Yeah, so, yeah,

26:39

AlphaFold 2 was, you

26:42

know, the biggest breakthrough,

26:44

right? To be able to understand

26:46

the structure of proteins, and

26:48

then there was something called AlphaFold

26:50

Multimer, which then allows you to understand

26:52

not just the structure of proteins by

26:55

themselves, each individual protein, but the structure of

26:57

proteins as they come together and what

26:59

we call complexes, so how these proteins fit

27:01

together. That

27:03

opens up and helps us answer

27:05

a lot of questions in biology,

27:07

but there's still a big hop

27:09

to designing therapeutics. And one

27:11

of the big classes of therapeutics

27:13

is what's called small molecules. So

27:16

these are molecules that are not

27:18

proteins. These would be things like caffeine

27:20

or paracetamol, things that more often

27:22

you can take as a pill. And

27:26

the way that these therapeutics work with

27:28

these small molecules is that they go

27:30

through the body, they go into the

27:32

cell, and they actually come and attach

27:34

themselves to these proteins. These

27:37

proteins are the fundamental building blocks of

27:39

life. They form these molecular machines

27:41

by interacting with other proteins. And

27:43

so you can you can imagine

27:45

that if you have another molecule

27:47

your drug that comes in and

27:49

attaches itself to a protein over

27:51

here Then it might disrupt the

27:53

ability for that protein to interact

27:55

with another protein, one of its

27:57

normal machines in day-to-day life.

27:59

And so you're modulating the function

28:01

of that protein with this small

28:04

molecule and that's the essence of

28:06

drug design and how

28:08

therapeutics work. And so you

28:10

can imagine as a chemist, your

28:12

job, a drug designer, you're trying

28:14

to design a small molecule that's going

28:16

to fit to this protein over

28:18

here and disrupt how it normally functions,

28:21

or in some cases, enhance how

28:23

it normally functions. And so

28:25

it'd be really helpful to understand

28:27

how this small molecule interacts with

28:29

the protein, what's the structure that

28:31

it might make, what are the

28:33

interactions, these literally physical interactions that

28:35

are being made. And

28:38

so that really inspired the

28:40

creation of AlphaFold 3, where now

28:43

we have a model that

28:45

not only predicts the structure

28:47

of proteins, but how these

28:49

proteins interact with small molecules,

28:51

also other fundamental molecular machine

28:53

building blocks, things like DNA

28:55

and RNA. And

28:58

this basically opens up the ability to

29:00

structurally understand small molecules,

29:02

which is a core part of drug design.

29:05

It opens

29:07

up new classes of targets. You

29:10

know, there are things like transcription

29:12

factors, which are proteins that sit on

29:14

DNA and read DNA. And you

29:16

can imagine now trying to design a

29:18

small molecule to change or disrupt

29:20

the function of something like that. And

29:22

so to do that, you'd really

29:24

want to be able to see literally

29:26

in 3D how this all looks.

29:28

And if I make changes to my

29:30

little molecule, how will that change

29:32

the way it interacts with this protein

29:34

and this biomolecular system? So AlphaFold

29:36

3 is now very, very accurate.

29:39

It allows us to answer a lot of

29:41

these questions purely in silico or purely

29:43

on the computer where before you would have

29:45

to go to the lab, literally crystallize

29:48

this stuff. This can take six months. It

29:50

can take years. Sometimes it's even impossible. Now

29:53

at ISO, our drug designers

29:56

are literally sitting with their laptop,

29:58

a browser-based interface, being able to

30:00

understand, make changes to their

30:02

designs, and see the impact of

30:04

that. Incredible. So

30:06

there are a couple

30:08

of interactions that AlphaFold 3 is

30:11

focused on: proteins and

30:13

nucleic acids, proteins and ligands,

30:15

and antibody to antigen.

30:17

Can you give us some

30:19

good examples of the

30:21

impact that AlphaFold 3 now has

30:23

on the interaction of

30:25

these different types of proteins

30:27

and molecules? Yeah, so

30:29

protein and ligands, that's the same as

30:31

protein and small molecules. So those two

30:33

terms, ligands and small molecules are synonymous.

30:36

That allows us to understand how

30:38

small molecule drugs interact. Then

30:41

we can think

30:43

about protein -protein

30:45

interactions. There's

30:47

a whole class of therapeutics called

30:49

biologics. These are things like antibodies.

30:53

That allows us to understand how they might

30:55

interact with our targets. It opens

30:57

up new modalities. And

31:00

that also encapsulates the

31:02

sort of the antibody anti

31:04

-gen interface. So if you're

31:06

designing an antibody, you

31:08

want to understand how your antibody

31:10

design is going to interact with

31:12

the protein surface there. So it's

31:14

the same model that we can

31:16

use across all of these different

31:18

applications. What are the

31:20

nuances of training a model like AlphaFold

31:22

3? And what are the benefits of

31:24

using a diffusion-based architecture? Yes,

31:27

a great question. There were

31:29

a lot of challenges we had to overcome to get

31:31

AlphaFold 3 to work. One

31:33

of the most interesting things

31:35

was actually just how do we

31:37

take something like AlphaFold, which

31:39

was only working with proteins, and

31:41

then input these new modalities,

31:43

these new data types of RNA,

31:45

DNA, small molecules. So

31:47

we had to work out how to tokenize, not

31:49

just proteins, which we kind of knew how to

31:52

do, but then how to tokenize DNA, how

31:54

to tokenize small molecules. For things

31:56

like DNA and RNA, that's a little bit more

31:58

obvious. We could tokenize on the

32:00

bases. But then for

32:02

small molecules, we

32:05

tried a whole bunch of different

32:07

stuff. It really ended up that this

32:09

atomic resolution tokenization worked super well. And

32:13

then you have the question of, okay, how do

32:15

you actually predict

32:17

the structure of this

32:19

mixture of different molecule

32:21

types? You couldn't use

32:23

the same framework as

32:25

AlphaFold 2, and this is

32:28

where diffusion modeling just

32:30

really shone. Here

32:33

we could actually model

32:35

every single individual atom and

32:37

the coordinate of every

32:39

atom individually and have a

32:41

diffusion model be producing

32:43

those 3D coordinates and the

32:45

tokenization that we talked

32:47

about is conditioning the inference

32:49

of that diffusion process.
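The tokenization-plus-diffusion idea described here can be sketched in toy form. This is purely illustrative Python, not AlphaFold 3's actual architecture; the tokenizer, the hash-based conditioning, and the denoising rule are all invented stand-ins:

```python
import random

def tokenize(entity_type, units):
    """Residue/base-level tokens for polymers, atom-level for ligands."""
    level = "atom" if entity_type == "ligand" else entity_type
    return [(level, u) for u in units]

tokens = (tokenize("protein", ["MET", "LYS", "VAL"])   # one token per residue
          + tokenize("dna", list("ACGT"))              # one token per base
          + tokenize("ligand", ["C", "C", "O", "N"]))  # one token per atom

def target_for(token):
    """Stand-in for the network's learned, token-conditioned prediction."""
    return [float(hash(token + (axis,)) % 7) for axis in range(3)]

T = 50
# start from pure noise: one 3D coordinate per token
coords = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in tokens]

for step in range(T):
    for i, tok in enumerate(tokens):
        tgt = target_for(tok)
        # each step moves the noisy coordinates a fraction of the way
        # toward the token-conditioned target, mimicking denoising
        coords[i] = [c + (t - c) / (T - step) for c, t in zip(coords[i], tgt)]

print(len(tokens), "tokens ->", len(coords), "denoised atom positions")
```

The point of the sketch is the division of labor: tokens carry identity at residue, base, or atom resolution, while the iterative loop refines continuous 3D coordinates conditioned on them.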

32:51

So interesting. And this was

32:53

a huge breakthrough. So,

32:55

you know, we're talking about

32:57

on our leaderboard just

32:59

a massive step change, particularly

33:01

in small molecule protein

33:03

interaction accuracy. It was a

33:05

massive step change and

33:07

something that really unblocks the

33:09

rest of the project.

33:11

Wow. So: data,

33:13

compute, and algorithms. We

33:15

know those three are important in

33:17

all other adjacent fields. But

33:19

I was surprised to read an interview

33:22

with Demis where he shared that we're

33:24

not data constrained in biology. Can

33:26

you share your point of view on that? You

33:28

know, I think it doesn't matter what field

33:30

of machine learning you're in, you're going to feel

33:32

some data constraints. And

33:34

I think the point here from

33:37

Demis is that it's not a

33:39

real bottleneck, as in we can

33:41

make progress with the data that

33:43

is out there and the data

33:45

we can generate and real progress

33:47

can be made. It's

33:50

not, we've got to

33:52

sit and wait 50 years for the world

33:54

to generate data before we can actually make

33:56

impact here. No, we're not seeing that at

33:58

all. Even in modeling spaces

34:00

where the data has been

34:02

sitting around for years,

34:04

we can see that

34:06

we can make really

34:08

substantial progress beyond anything that

34:10

people have experienced before. Now,

34:13

does that mean there's no opportunity for

34:15

data in biology? Absolutely not. It's

34:17

going to be a fundamental

34:19

part of how we continue

34:21

to develop these models and

34:23

these systems will be what

34:25

data we go out and

34:28

generate. And

34:30

there, I think, there's just a massive

34:32

opportunity. In my mind, the

34:36

data

34:38

for machine learning in biology hasn't actually

34:40

been created yet. Yes. Yes, there's a lot

34:42

of historical data,

34:44

but that historical data hasn't been created for

34:47

the purposes of machine learning. And

34:49

so when you're going out and thinking, how

34:51

do I create data to actually train my model?

34:53

You're thinking in a very different way to

34:55

how people have gone out and generated data in

34:57

the past. And there's a big opportunity

34:59

there to explore. What kind of

35:01

data do you think we're missing here right

35:03

now? And do you

35:05

think that we need anything

35:07

in synthetic data? Yes,

35:09

so I'm a massive fan of

35:11

synthetic data, actually. I have been

35:13

since the very beginning of

35:16

my career, where, you know,

35:18

I was generating synthetic text

35:20

data just to overcome the fact

35:22

that, you know, I was a

35:24

PhD student with access to a

35:26

couple of thousand images and Google

35:28

had millions and millions of images

35:30

and so instead I just generated

35:32

tons and tons of synthetic data

35:34

and that unblocked things. And we're

35:36

seeing the same thing, especially

35:39

in the chemistry space where we

35:41

have good theory. We actually

35:43

know a lot about physics. We

35:45

have the theory

35:47

of quantum chemistry and quantum mechanics

35:49

and we can create simulators

35:51

out of that. We can

35:53

approximate that and create more scalable

35:56

molecular dynamics simulations. This

35:58

gives the basis for a whole

36:00

host of synthetic data. Then

36:02

we have the models themselves, especially

36:05

generative models. These

36:07

can actually generate data that

36:09

we can use scoring systems

36:11

to help really enhance the

36:13

information content of this data.
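That generate-then-score loop can be caricatured in a few lines. Everything below is a stand-in (a random "generator" and a made-up scorer), not any real chemistry tooling:

```python
import random

random.seed(0)

def generate_candidate():
    """Stand-in generative model: a random 'molecule' as 4 features."""
    return [random.uniform(-1.0, 1.0) for _ in range(4)]

def simulator_score(mol):
    """Stand-in for a physics-based scorer (e.g. an MD approximation)."""
    return -sum(x * x for x in mol)  # higher (closer to 0) is better

candidates = [generate_candidate() for _ in range(1000)]
scored = sorted(candidates, key=simulator_score, reverse=True)

# keep only the top 5%: the scoring step concentrates the information
# content of the synthetic dataset before it is used for training
synthetic_dataset = scored[:50]
print(f"kept {len(synthetic_dataset)} of {len(candidates)} candidates")
```

The design choice being illustrated: the generator supplies cheap volume, and the scorer (the "theory we know") supplies the signal that makes the kept samples worth training on.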

36:16

But I think one of the big

36:18

open spaces will be on what's

36:20

called in vivo data. So

36:23

data that you would normally measure

36:25

on a real animal, something like a

36:27

mouse or a rat. There's

36:30

some historical data on that, but

36:32

you can't generate tons of

36:34

it; you can't really generate any

36:36

at all, right? So then there's a

36:38

big opportunity to look to new

36:40

data generating technologies. There are some incredible

36:42

people doing things like organoids on

36:44

a chip. So

36:47

ways of starting to measure

36:49

things that you would normally measure

36:51

on a real animal, but,

36:53

you know, completely on a chip.

36:55

So, you know, I think

36:57

there's gonna be a whole host

36:59

of new breakthroughs in

37:02

data-generating technology in biology

37:04

and chemistry that's going to have big

37:06

impact on how we think about modeling

37:08

that world as well. Are you working

37:10

on any of that internally or

37:12

are you hoping that other players can

37:14

fill in some of that gap? So

37:17

internally, we actually don't have

37:19

any of our own labs

37:21

at Isomorphic Labs, but we

37:23

work with a whole bunch

37:25

of other companies. We

37:28

generate a lot of data ourselves, a

37:31

lot of proprietary data. We've seen an amazing impact

37:33

of that. It makes a lot of sense. So

37:36

there's a point of view that

37:38

modeling structure of molecules and modeling

37:41

their function and the modulation function

37:43

is very important, but not necessarily

37:45

always the limiting factor in drug

37:47

development. What's your point of view

37:49

on that? Yeah, as

37:51

I touched on before, drug design

37:53

is really, really complex. And that's

37:55

before you even get to drug development,

37:57

which is where you take those

37:59

designs and you start putting them into

38:02

real people, clinical trials. There

38:04

are so many bottlenecks throughout

38:06

this whole design and development space.

38:10

Drug development is how

38:12

do we start

38:15

to approach clinical trials?

38:17

How should we test these drugs out in people? How

38:19

can we do this in a really timely manner, but

38:22

still a really safe manner? There's

38:25

a lot of bottlenecks there that I

38:27

think the industry as a whole

38:29

will need to work out

38:31

how to innovate in that space,

38:33

especially as our predictive models

38:35

of how these molecules will interact

38:37

with people, how toxic they

38:40

will be. As these predictive

38:42

models get better and better, we will

38:44

have to change the way that

38:46

we approach clinical trials to really make

38:48

use of that. Ultimately, to get

38:50

therapeutics into the hands of patients who

38:52

really desperately need them. Even

38:54

in the design of

38:57

molecules themselves, as we talked

38:59

about before, it's not just understanding

39:01

the structure of these molecules

39:03

it's not even just understanding how

39:05

these molecules change the function

39:07

of these proteins but we need

39:09

to understand how these molecules

39:12

change the function of pretty much

39:14

every single protein in our

39:16

body, right? Because if we take

39:20

this as a pill, it's

39:22

going to go everywhere. And that's

39:22

the major cause of toxicity

39:24

is when you've designed

39:26

this amazing molecule that perfectly

39:28

modulates your specific target that you know

39:31

is key to your disease, but it

39:33

also affects other things.

39:35

Now, of course,

39:37

you do a lot of screening to

39:39

protect against that. But the more

39:41

we can predict that, the better. What's

39:44

really exciting from my perspective is

39:46

if we're creating these general models that

39:48

understand how this molecule interacts with

39:50

this target But also any other target

39:53

then why can't we just use that same

39:55

model to understand how these molecules interact

39:57

with the rest of our body? Right, so

39:59

interesting. So what is

40:01

now possible with AlphaFold 3 for

40:03

drug designers? How are you

40:05

using it internally? So AlphaFold

40:07

3 gives our drug designers

40:10

the ability to understand how

40:12

their molecule designs really interact

40:14

with this protein target. And

40:16

this is the target of

40:18

disease. And so our

40:21

drug designers can make changes to

40:23

the design and then see instantly

40:25

how that changes the way that

40:27

this molecule physically interacts with the

40:29

protein target. That's really,

40:31

really powerful. Before

40:33

AlphaFold 3, you would be completely

40:36

blind to this. You wouldn't actually

40:38

probably know how your molecule is

40:40

interacting with your protein. You'd

40:42

be using your best intuition, maybe

40:44

somewhere down the line in the

40:46

drug design project, you would get

40:48

your structure crystallized with a particular

40:50

design. That means going out to

40:52

a real lab six months later,

40:55

if you're lucky, getting a

40:57

resolved 3D structure. But

41:00

even then, that's just the 3D structure of

41:02

a single design, not every single change

41:04

that you make. Yeah. So

41:06

AlphaFold 3 completely changes the way

41:08

chemists can do this design work. But

41:10

I must stress, we're nowhere

41:13

near as far as we want to go. Because

41:15

it's not just about what these molecules

41:17

look like in terms of interacting. We

41:19

actually want to know how strongly these

41:21

molecules interact with this protein. We

41:24

want to know other properties of

41:26

these molecules. We want to understand

41:28

how the way that these molecules

41:30

interact with this protein and how

41:32

that changes the fold or the

41:35

conformation of the protein, how that

41:37

changes the function of the protein,

41:39

how it might actually change the

41:41

dynamics of the cell. There were

41:43

so many questions and these are

41:45

these other AlphaFold-like breakthroughs

41:48

that we're working on. You

41:50

know, we have created

41:52

incredible models there that our chemists

41:54

are using in this design process.

41:56

Interesting. So you're designing some drugs

41:58

internally. What targets and programs are

42:01

you focused on? So we have

42:03

a really exciting internal program of

42:05

drug design projects. These are focused

42:07

on immunology and oncology. We've

42:09

been making some incredible progress there.

42:11

It's been really exciting to see

42:13

especially how these models have transformed

42:15

the way that we're actually approaching

42:18

drug design on these programs.

42:20

You're also working with Eli Lilly

42:22

and Novartis and recently you announced

42:24

an expansion with Novartis's partnership. Can

42:26

you share a little bit about

42:28

what these partnerships look like? Yes,

42:31

so we signed these

42:33

initial partnerships, two

42:35

partnerships, one with Eli Lilly, one

42:37

with Novartis. That was fantastic. They

42:39

brought some really, really challenging problems

42:41

to us. I think it's no

42:43

secret that, you know, the sort

42:46

of targets that, for example, Novartis

42:48

brought to us, these are

42:50

sort of targets that, you

42:52

know, the field and Novartis, for

42:54

example, have been working on for, you

42:56

know, 10 years plus. So

42:59

these aren't sort of

43:01

"oh, we'll try things out"

43:03

problems. These are real,

43:05

hard things. Last

43:08

year was an amazing year, both

43:10

for our internal projects, but also

43:12

for these partner projects to really

43:14

see how well these models are

43:16

working. It's allowed

43:18

us to really uncover new chemical

43:21

matter, working on new ways

43:23

to modulate these targets that people

43:25

have worked on for a long time.

43:28

It's been amazing to see this

43:31

new deal which has expanded on the

43:33

Novartis collaboration, which I think is a

43:35

real testament to some of the success

43:37

of the early days of these partnerships.

43:40

Congratulations. I think it's an incredible

43:42

milestone, especially just one year in. Yeah.

43:44

So I'd love to talk a little bit

43:46

about the team. You've built a truly

43:48

excellent team composed of the highest caliber talent

43:50

across many different fields, AI,

43:52

chemistry, biology, and

43:54

you've also brought outsiders into the

43:56

field to help question traditional thinking. Can

43:58

you share a little bit about how

44:00

you thought about this? Yeah,

44:03

so the space of

44:05

AI for drug design

44:07

hasn't really existed for

44:09

very long. So

44:11

the chances of finding a

44:13

world expert at drug design

44:15

who's also a world expert

44:18

and machine learning or deep

44:20

learning is basically zero. Just

44:23

because these fields, these fields haven't

44:25

coexisted for long enough. I genuinely

44:27

think about a new sort of

44:30

field of science that ISO

44:32

is breeding because we are, you

44:34

know, we have these people who

44:36

really live and breathe the intersection

44:38

of this. So,

44:40

you know, because we

44:42

can't hire these people, you know,

44:44

I really think about how do

44:46

we bring the world experts at

44:48

drug design and medicinal chemistry, and

44:51

the world experts at machine learning and

44:53

deep learning, and get these

44:55

incredible people sitting side

44:57

by side, because it's

44:59

not just enough to have

45:01

these amazing people sitting in their

45:03

isolated teams. We

45:06

need people sitting side by side,

45:08

speaking each other's languages with

45:10

a lot of empathy, a

45:12

lot of curiosity, curiosity

45:14

to understand this new science,

45:16

to really build intuitions in

45:18

your own language. And

45:20

we've seen just such amazing

45:23

things come out of this

45:25

dynamic where you really have

45:27

a generalist machine learner who

45:29

doesn't know anything about chemistry

45:31

or biology, start to come

45:33

in and understand the problems

45:36

of a medicinal chemist and

45:38

a drug designer. And

45:40

when I think about even

45:42

hiring machine learners and machine learning

45:44

scientists and engineers for the

45:46

research that we're doing, I'd

45:49

say, you know, 60, 70,

45:51

80% of the people on

45:54

our team have no prior knowledge

45:56

of chemistry or biology, maybe,

45:58

you know, high school or university

46:00

level. And that can

46:02

actually be a real asset because

46:04

you come in sort of a

46:06

little bit naive. And

46:08

as long as you're curious, I think

46:10

one of the key things is

46:13

asking, you know, the curious questions, asking

46:15

these, like, stupid questions. And

46:17

then that allows us to

46:19

come at the problems from

46:21

first principles. It almost allows

46:23

us to break through the

46:26

dogma of previous experience and

46:28

how people traditionally approach these

46:30

problems. We can think ground

46:32

up from scratch. And that's

46:34

a lot of the mentality of how

46:36

we think about creating these research breakthroughs.

46:38

A little naive and highly curious and

46:41

high agency is a very good thing.

46:43

Yes, exactly, exactly. So in

46:45

November last year, you also made a very

46:47

big move in launching the AlphaFold server, which

46:49

releases code and model weights for

46:51

academic use. Can you share a little

46:53

bit about why? Yeah,

46:55

so AlphaFold has a long,

46:57

long lineage of being open

47:00

for this academic and scientific

47:02

use. And it

47:04

was really important with this

47:06

latest breakthrough of AlphaFold

47:08

3 that we make sure

47:10

that this scientific community has

47:13

access to this functionality because, you

47:15

know, yes, AlphaFold 3 is going to

47:17

be incredibly useful for drug design.

47:19

It already is. But it's

47:21

also useful for, you know,

47:23

many other areas of fundamental biology

47:26

and just understanding biology. And

47:28

people are

47:30

using the AlphaFold 3 server and

47:32

modeling in very, very creative ways.

47:35

So, you know, it's very important for

47:37

us to make sure that there is

47:39

that sort of free use for non-commercial

47:41

academic work. And it's been

47:43

incredible to see the take up of that

47:46

and the use of the server. Let's

47:48

talk a little bit about the future. Can

47:50

you give us a tease of what else is

47:52

to come with AlphaFold? In

47:55

terms of structure prediction

47:57

as a problem, in

47:59

my mind, I want

48:01

to completely solve this. I

48:04

think AlphaFold 3 is a fantastic step

48:06

on the way to that. It's a

48:08

significant breakthrough. But

48:11

it's not 100% accuracy. What

48:14

does even 100% accuracy mean in

48:16

this space? Like

48:18

with a lot of areas of

48:20

science, as you start to push the

48:22

boundaries, you see that the problem

48:24

opens up into even more problems. That's

48:27

the addictive part of doing science.

48:32

AlphaFold 3 is a good example

48:34

of that, where as you start to

48:36

get these capabilities, you see

48:38

that actually there are even more

48:40

deeper problems that we want to be

48:43

working on and stepping towards. So

48:45

yes, understanding structure better and better and

48:47

more accurately is always going to

48:49

be interesting for us. But then it's

48:51

also not just necessarily about static

48:53

structure. So AlphaFold 3 models these

48:56

crystal structures, which are

48:58

almost static crystallized versions

49:00

of these molecules, how these

49:02

molecules interact. But in

49:04

reality, we don't have crystals inside

49:06

of us. These molecules are in solution.

49:08

They're moving about, they're dynamic. So

49:10

you can think, OK, well, maybe

49:13

understanding the dynamics of these systems is

49:15

actually also going to be really

49:17

interesting. What does a

49:19

GPT-3 moment look like in AI

49:21

biology? And when do we get

49:23

there? So if I think about

49:25

GPT-3, this

49:27

is really a generative model. So

49:30

something that's generating text. And

49:32

the GPT-3 moment for

49:34

me was, you

49:37

know, crossing over that

49:39

boundary between, yeah,

49:41

we've got generative models of text and

49:43

they generate some stuff and it looks

49:45

like text, but I'm not convinced that

49:47

it's generated by a human. And

49:50

GPT-3 started to be that first

49:52

point where you're like... shit. This

49:54

kind of looks like a human. And

49:57

so this generative model is

49:59

actually recreating the distribution of

50:01

data that it's trained on.

50:04

And what is a generative model? Generative

50:06

model is something that fits the manifold

50:08

of the data it's trained on.
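The "fits the manifold of the data it's trained on" idea, in the simplest possible form: fit a one-dimensional Gaussian to training samples, then sample from the fit. A toy stand-in, not a real biology model:

```python
import random
import statistics

random.seed(1)
train = [random.gauss(5.0, 2.0) for _ in range(10_000)]  # "real world" data

mu = statistics.fmean(train)      # training = estimating the parameters
sigma = statistics.stdev(train)

# generation = sampling from the fitted distribution
generated = [random.gauss(mu, sigma) for _ in range(10_000)]

# the generated samples recreate the training distribution's statistics
print(round(statistics.fmean(generated), 1), round(statistics.stdev(generated), 1))
```

In one dimension the "manifold" is just a bell curve, but the same logic scales up: a good generative model of molecules samples things statistically indistinguishable from, and occasionally beyond, what it was trained on.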

50:10

So when I think about this applied

50:12

to biology, you can

50:14

think about these generative

50:17

models actually starting to

50:19

recreate that GPT-3

50:21

moment: recreating what things

50:23

would actually look like in reality. And

50:26

that's quite exciting because that means

50:29

that these models are spitting out things

50:31

that either they actually exist in

50:33

the world and we can kind of

50:35

validate that or maybe even discover

50:37

new things that exist in the world

50:39

or they could exist in the

50:41

world, which means that they could be

50:43

things that we could design or

50:46

manufacture or create, that would actually

50:48

be stable and work and exist in

50:50

our physical reality. And I

50:52

think the cool thing

50:54

about this in biology is

50:56

that, unlike with language,

50:58

where, when

51:00

it generates something of human-level quality,

51:02

we can understand that because it

51:04

is human derived. But

51:06

a lot of problems in chemistry and

51:08

biology, we even struggle to understand ourselves. And

51:10

so when we get to that GPT-3

51:12

moment, I think it will look a

51:14

lot less like GPT-3 and will

51:16

feel a lot more like move 37 in

51:18

AlphaGo. Interesting. Where we're starting

51:21

to see things that are beyond

51:23

human understanding, but that do exist

51:25

in the real world, that exist

51:27

in our physical reality, but

51:29

are beyond sort of human comprehension.

51:31

Right. And that's just going to

51:33

be mind blowing. In fact, you

51:36

know, we're starting to see that

51:38

internally with our generative models, that

51:41

we're creating designs that a

51:43

human drug designer would say,

51:46

Hmm, I'm not so sure about that.

51:48

I much prefer this. And then

51:50

you test it out in physical reality

51:52

and the generative model is correct

51:54

and the human is wrong. That's fascinating.

51:57

I love the move 37 analogy

51:59

when the model starts to show elements

52:01

of creativity past the

52:03

human. Exactly. Move 37 was this amazing move

52:05

during the AlphaGo games against

52:07

Lee Sedol. It was, you

52:09

know, the 37th move of the game

52:11

and it stunned the world, stunned the Go

52:14

world, because it was uninterpretable by humans.

52:16

It looked like a mistake. No one had

52:18

ever played this move in the entirety of,

52:20

you know, thousands of years of human history

52:22

playing Go. And it turned out

52:24

as you unrolled the game that this was

52:27

the critical move that allowed AlphaGo to beat

52:29

Lee Sedol in that match. And we're going

52:31

to see so much of that sort of

52:33

behavior coming out of these models, especially when

52:35

we're applying them to things

52:37

outside of native human

52:39

understanding like chemistry and biology.

52:41

Yeah, I love that, that's also

52:45

our punchline today. So when

52:45

do we see our first AI

52:47

generated drug in clinic and

52:49

also in phase one, two,

52:51

and three trials? So we're

52:53

making amazing progress on our

52:55

drug design programs. And, you

52:57

know, the thing I think

52:59

about actually is, as we

53:01

start to get a whole

53:03

bunch of these

53:06

AI-designed assets, these molecules

53:09

get into clinical phase.

53:12

How can we actually start

53:14

to think about engaging in

53:16

that clinical development to get

53:18

these molecules to people as

53:20

fast and as safely as

53:23

possible because there's so much

53:25

unmet medical need. So

53:28

yeah, here I think about

53:30

what are gonna be new

53:32

ways to engage with regulatory

53:34

bodies? What are going to be

53:36

new ways to incorporate our predictive models for

53:38

not only how this molecule works for the disease,

53:41

but how, as we talked about, how it

53:43

interacts with the rest of the body, the

53:45

types of toxicity it may induce.

53:48

I think there'll be a lot

53:50

of opportunities to think about just

53:53

streamlining and speeding up this process,

53:55

maybe even completely changing the way

53:57

we think about human clinical trials

53:59

as, you know, our AI models

54:01

become so good that we can design

54:03

these molecules so much quicker in a much

54:05

more targeted manner with so much more

54:07

knowledge about how they work. Yeah. So

54:10

that'll change the game. But I think

54:12

we've got a long way to go as

54:14

an industry to really work out how

54:16

that changes. Yeah. Last question. As

54:18

isomorphic succeeds and potentially as a

54:20

whole field succeeds, what happens to

54:22

the traditional world of pharma? I

54:25

think they become, you know, in

54:27

some sense, pharma will be

54:29

using AI. I think there's

54:31

no world where, in five years' time, you

54:33

will be designing a drug without

54:35

AI. That's an inevitability.

54:39

It'll be like, you know, trying

54:41

to do science without using

54:43

maths. AI will be this fundamental

54:45

tool for biology and chemistry.

54:47

It already is, at least in

54:50

Isomorphic's world, that

54:52

everyone will be

54:54

using. So it's not

54:56

going to be, "Oh, is it pharma or is it

54:58

AI?" They're going to be one and the same,

55:00

in the sense that the whole industry will adapt to

55:02

that. Yeah. Amazing. Max,

55:04

thank you so much for joining us today. This

55:06

was a fascinating conversation. It's been a pleasure. Thank you.
