Episode Transcript
0:00
End of life care can be
0:02
really quite tricky and people may
0:04
have heard of advanced care directives
0:06
but only 14% of older Australians
0:08
actually have one in place. Whilst
0:10
they can help guide end of
0:12
life care there can be some
0:14
complexities, and often not all situations are
0:16
covered, because they're impossible to predict.
0:18
Like are you going to accept
0:21
antibiotics if you've got a sore
0:23
ear? You can't necessarily think of all
0:25
of it and this is where artificial
0:27
intelligence may have a role. Who knows? Our
0:29
producer Shelby Traynor has looked into
0:31
this, the role of AI in
0:33
this space and, as you alluded
0:36
to earlier Norman, there are some
0:38
convincing arguments here. I think it's
0:40
really important, having someone that can
0:42
speak on your behalf, that knows you
0:44
well, that knows how you'd like to
0:46
live your life and what sort of
0:48
things you would or wouldn't want at
0:50
the end of your life. Nicola
0:52
Champion is more familiar than most
0:55
people with the end of life.
0:57
She's a palliative care nurse
0:59
who also took care of her
1:01
dad in his final weeks in
1:03
their hometown of Port Pirie. Dad
1:05
was an interesting character, very independent,
1:08
very strong views on things, great
1:10
sense of humour. Nicola's dad Charlie
1:12
had prostate cancer but he'd lived
1:14
a relatively healthy and independent life,
1:16
until two years after his diagnosis,
1:18
when he arrived on Nicola's doorstep
1:21
out of sorts. And I opened the
1:23
door and he said, I thought you were
1:25
going to look after me when I got
1:27
sick. And I was really taken aback
1:30
because I thought, well, yeah, I will. It
1:32
turned out he was in a lot of
1:34
pain. Charlie's cancer had spread. He would have
1:36
six weeks left if he didn't do anything,
1:38
six months if he did. And my dad
1:40
just said, I'll take the six weeks. He
1:43
never once said, why me? He's just so
1:45
pragmatic and he was just like, well that's
1:47
it, that's my lot, I've got six weeks.
1:49
Charlie had an advanced care directive
1:51
that Nicola helped him write out.
1:53
An advanced care directive lets doctors
1:55
know what you do or don't
1:57
want at the end of life.
2:00
Would you want to be given antibiotics?
2:02
Would you want a blood transfusion if
2:04
it could extend your time? I don't
2:07
remember so much the conversation, but what
2:09
I know is that he fully trusted
2:11
me. So we didn't go down to
2:13
every scenario that he might want to
2:16
consent to or not consent to. He
2:18
just appointed me as his medical decision
2:20
maker. Surrogate decision-makers are people you legally
2:22
designate to make decisions for you in
2:25
the event you become incapacitated. For example,
2:27
if you're in a coma or aren't
2:29
cognitively aware of what's going on. So
2:32
for me it was about really knowing
2:34
my dad's values in life and the
2:36
way he liked to live and the
2:38
way I think he wanted to die
2:41
and so that's what was going to
2:43
guide my decisions. Not everyone will have
2:45
a surrogate decision-maker. And even if they
2:48
do, those surrogates aren't always available in
2:50
an emergency. And they don't always feel
2:52
equipped to make tough calls. It can
2:54
be incredibly stressful to be the voice
2:57
of someone who can't speak for themselves.
2:59
It happens so frequently in the ICU where patients
3:01
are critically ill and can't make decisions for themselves
3:03
and you see their family and their surrogate decision
3:05
makers struggling with the burden of that choice. This
3:07
is Dr Teva Brender, an internal medicine resident at
3:09
the University of California, San Francisco. He saw this
3:11
situation play out almost daily. The patient has a
3:14
breathing tube, they can't speak, or they're so ill
3:16
that they're confused, and every decision becomes this huge
3:18
pivot point. Do we continue antibiotics? Do we pursue
3:20
this procedure or this surgery? Only about 14% of
3:22
older Australians have an advanced care directive. So life-saving
3:24
or life-ending decisions can come down entirely to a
3:26
family member. And there's a lot they need to
3:28
know to make an informed decision. Dr. Brender and
3:30
his mentor wondered if there was a better way,
3:32
a less stressful and more accurate way, to come
3:35
to these decisions. And so we were just chatting
3:37
together and... AI has obviously been in the news
3:39
since 2022 with ChatGPT and we thought, well, how
3:41
could we leverage artificial intelligence to help surrogates? Because
3:43
this is such a common problem in the ICU.
3:45
They decided to put an idea out there, drafting
3:47
a paper that theorized how generative AI or large
3:49
language models like ChatGPT might be used. Eventually they
3:51
brought in a geriatrician and palliative care doctor to
3:53
get his perspective. It's kind of funny, his first
3:56
reaction to our proposal was, heck no, that's frightening,
3:58
that's dystopian, do we really want to be considering
4:00
this? But Dr. Brender and his colleagues are far
4:02
from the only people thinking about this. My name
4:04
is Brian Earp and I'm an associate professor of
4:06
biomedical ethics at the National University of Singapore. At
4:08
the same time AI was having its first of
4:10
many moments in the sun, Dr. Earp was working
4:12
as an editor on the Journal of Medical Ethics.
4:14
In that journal, experts were trying to come up
4:16
with better ways to deliver end-of-life care, in line
4:19
with a person's wishes. One proposal that was gaining
4:21
attention was called a patient preference predictor. The idea
4:23
here is that you would do a big survey
4:25
of the population, you would give people various scenarios
4:27
that they might encounter, and you would ask them,
4:29
what would you like to have happen if you
4:31
found yourself in this situation, this situation, this situation,
4:33
and so on? And then you would also collect
4:35
various demographic features about people: their age and their
4:37
sex and their ethnic background, their religious affiliation, maybe
4:40
their socio-economic status. So instead of everyone filling out
4:42
an advanced care directive, a sample of the population
4:44
would fill one out, an extensive one. It was
4:46
assumed that people with the same demographic features would
4:48
make similar decisions at the end of life. This
4:50
seems to solve some of the problems, but it
4:52
also was met with a lot of criticisms. The
4:54
biggest one? People don't like being reduced to their
4:56
demographic features. We like to think of ourselves as
4:58
unique, not just in age, sex, and ethnicity.
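
To make the survey-and-demographics idea concrete, the population-level predictor described above boils down to something like the toy sketch below: collect scenario choices from a sample of people, record their demographic features alongside each choice, and predict an incapacitated patient's preference by majority vote among demographically similar respondents. The feature names, survey rows, and scenario here are invented purely for illustration; the real proposals involve large surveys and proper statistical modelling, not five rows and a vote.

```python
# Toy sketch of a population-level "patient preference predictor".
# All respondents, features and the scenario are invented for illustration;
# this is not data or code from any real study.
from collections import Counter

# Each survey respondent: demographic features plus their stated choice
# for one scenario (accept or decline a ventilator that buys six more months).
survey = [
    {"age_band": "70s", "sex": "F", "religion": "none",     "choice": "decline"},
    {"age_band": "70s", "sex": "F", "religion": "none",     "choice": "decline"},
    {"age_band": "70s", "sex": "M", "religion": "catholic", "choice": "accept"},
    {"age_band": "80s", "sex": "M", "religion": "none",     "choice": "decline"},
    {"age_band": "80s", "sex": "M", "religion": "catholic", "choice": "accept"},
]

def predict_preference(patient: dict) -> tuple[str, float]:
    """Majority vote among respondents who share the patient's demographics."""
    matches = [r for r in survey
               if all(r[k] == patient[k] for k in ("age_band", "sex", "religion"))]
    if not matches:
        return "unknown", 0.0
    votes = Counter(r["choice"] for r in matches)
    choice, count = votes.most_common(1)[0]
    return choice, count / len(matches)

# An incapacitated patient described only by her demographic bucket.
print(predict_preference({"age_band": "70s", "sex": "F", "religion": "none"}))
# -> ('decline', 1.0)
```

Notice that the prediction depends only on the demographic bucket, which is exactly the criticism raised above: the individual is reduced to their age, sex and background.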
5:01
Earp was watching the
5:03
debate about this preference
5:05
predictor play out at the
5:07
same time he and
5:09
his colleagues were playing around
5:11
with AI. My then
5:13
housemate was also my friend
5:15
and collaborator, Sebastian Porsdam
5:17
Mann. He realised there were
5:19
these interfaces you could
5:21
gain access to where you
5:24
can further train a
5:26
general model in a process
5:28
called fine-tuning. Basically, you
5:30
can tweak it to
5:32
fit a specific purpose. They
5:34
ended up feeding their
5:36
AI model dozens of their
5:38
own research papers. The
5:40
purpose was to teach the
5:42
model to recognise the
5:45
relationship between a paper's abstract,
5:47
which is a short
5:49
summary of the research, and
5:51
the research itself. And
5:53
once it's learned that relationship,
5:55
you can put in
5:57
a new title and a
5:59
new abstract of a
6:01
paper that you haven't yet
6:03
written, but that maybe
6:06
you plan to write, and
6:08
then you press go,
6:10
and in a matter of
6:12
seconds it will just
6:14
generate a draft of a
6:16
paper in your voice
6:18
using your style of reasoning,
6:20
drawing on the kinds
6:22
of arguments that you've used
6:24
in the past, but
6:27
applying it to this new
6:29
topic. This proved that
6:31
the AI was able to
6:33
learn Brian's views and
6:35
apply that knowledge to new
6:37
situations. So, he thought,
6:39
what if it could be
6:41
used to learn your
6:43
views on things, for example,
6:45
whether you'd want to
6:47
be put on a ventilator
6:50
if it gave you
6:52
six more months of life.
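
The abstract-to-draft fine-tuning workflow described above can be pictured roughly as follows. This is a minimal sketch assuming the OpenAI fine-tuning API as one possible interface; the file name, example paper, and model names are placeholders, not the setup Earp and Porsdam Mann actually used.

```python
# Hypothetical sketch: fine-tune a general model on your own papers
# (title and abstract in, full paper out), then ask it to draft a new one.
# Paper contents, file names and model names are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Pair each past paper's title and abstract (input) with its text (target).
my_papers = [
    {"title": "An invented example title",
     "abstract": "We argue that ...",
     "full_text": "Introduction ..."},
    # ... dozens more of your own papers
]

with open("my_papers.jsonl", "w") as f:
    for p in my_papers:
        f.write(json.dumps({"messages": [
            {"role": "system", "content": "Write the paper in the author's own voice."},
            {"role": "user", "content": f"Title: {p['title']}\nAbstract: {p['abstract']}"},
            {"role": "assistant", "content": p["full_text"]},
        ]}) + "\n")

# 2. Upload the examples and start a fine-tuning job on a general model.
training_file = client.files.create(file=open("my_papers.jsonl", "rb"),
                                    purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id,
                                     model="gpt-4o-mini-2024-07-18")

# 3. Once the job has succeeded (poll or check the dashboard), fetch it to get
#    the tuned model's name, then prompt it with a title and abstract you
#    haven't written a paper for yet.
finished = client.fine_tuning.jobs.retrieve(job.id)
draft = client.chat.completions.create(
    model=finished.fine_tuned_model,
    messages=[{"role": "user",
               "content": "Title: Should a digital duplicate speak for me?\n"
                          "Abstract: This paper asks whether ..."}],
)
print(draft.choices[0].message.content)
```

The same pattern, past examples in and a plausible extrapolation out, is what suggested the model might also extrapolate a person's views to medical scenarios they have never explicitly written about.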
6:54
There was a problem
6:56
though. I work professionally in
6:58
philosophical ethics, so I
7:00
write dozens of papers explicitly
7:02
stating what my views
7:04
are on various topics, and
7:06
so it's not a
7:08
big surprise that this sort
7:11
of model can come
7:13
up with a reasonable guess
7:15
about what I might
7:17
say morally about some situation,
7:19
including a hypothetical situation
7:21
in which I were incapacitated.
7:23
So, we did some
7:25
informal experiments with our academic
7:27
paper model, and we
7:29
just asked it, you know,
7:32
suppose I was in
7:34
X, Y, and Z conditions,
7:36
what do you think
7:38
I would want to have
7:40
happen, and why, and
7:42
the model would typically give
7:44
an answer that's at
7:46
least plausible. So, if you
7:48
haven't written dozens of
7:50
papers on medical ethics (who
7:52
has?), how would an
7:55
AI model learn about your
7:57
preferences? Well, one way
7:59
could be through social media.
8:01
There already are existing
8:03
digital duplicates of us out there in the
8:05
world that are owned by technology companies. So there's
8:07
a digital twin of me that Amazon owns. And
8:09
there's a little twin of me that Facebook owns
8:11
and that Twitter owns and so forth. And they
8:13
use these to predict my preferences in particular domains,
8:16
namely so that they can sell me stuff. Clearly
8:18
it's possible to predict some things about people based
8:20
on information you might think is not obviously or
8:22
directly relevant. Another approach would be to train
8:24
AI on your medical records. Here is Dr
8:26
Brender again. So this morning I was
8:28
in our lung transplant clinic. We have
8:30
this technology. It listens to the conversation
8:33
you're having with the patient. And at
8:35
the end of the visit, it summarizes that
8:37
in an after visit summary that the patient
8:39
can take home. And it's really truly impressive.
8:41
So what if we could record these visits
8:44
and capture all of the nuance that happens
8:46
in that 20 or 30 minute visit, all
8:48
of the chit-chat at the door, that really
8:50
speaks a lot about what's important to that
8:53
person, what do they do over the weekend,
8:55
how is their family, etc. Some of these
8:57
things really do seem like they might be
8:59
far-fetched and you might think that you'd have
9:02
a much better go if you focused on
9:04
specific prior treatment decisions people have made.
9:06
So it's one thing to just be
9:08
chit-chatting with my doctor, but if I
9:10
train a model on my electronic health
9:13
records, for example, which show all the
9:15
various other decisions I've made, there's at
9:17
least a question about the extent to
9:19
which I can extrapolate from those kinds
9:21
of decisions to novel cases that
9:23
I won't have yet encountered. But
9:25
imagine this AI could find out
9:27
your vaccination status, or whether you
9:29
went for a more aggressive approach
9:31
to cancer treatment. Dr. Earp has even
9:34
suggested training the AI not passively,
9:36
using data collected from the internet
9:38
or from your medical records, but
9:40
directly, you could take charge of
9:43
building your own digital duplicate. Whether
9:45
an AI should therefore make a
9:47
decision is a different question, because
9:49
it's an ethical question. It's not
9:51
a technical question. And this raises
9:54
another problem, one that AI may
9:56
or may not solve. Nicola was
9:58
familiar with the health care system.
10:00
She trusted she could make the right
10:02
choices for her dad. But even with
10:04
her expertise and an advanced care directive
10:07
in hand, she struggled to get doctors
10:09
to listen. The way we'd worded his
10:11
advanced care directive was in the terminal
10:13
phase of a terminal illness, or if
10:15
he was in a persistent vegetative state
10:17
that I would then speak on his
10:20
behalf. But when she tried to discharge
10:22
her dad from hospital following a procedure
10:24
so he could die at home, she
10:26
was told he wasn't necessarily terminal because
10:28
they hadn't explored all options, options that
10:30
Charlie had stated he did not want to
10:32
pursue. I just couldn't get anyone to listen
10:34
to me that dad was told he would
10:37
have six weeks, he accepted that he was
10:39
terminally ill, was going to have six weeks,
10:41
that he wanted to be back in the
10:43
country town where we were from, and I
10:45
really had to fight hard to try and
10:47
get him home. It's not good enough to
10:49
have an advanced care plan. People have to
10:51
respect it. And they have to believe that
10:53
if someone's been named as a decision maker,
10:55
that that person didn't do that lightly.
10:57
Nicola thinks that if there had been a
11:00
digital duplicate of her dad in the room
11:02
that day, it might have swayed the doctors.
11:04
If they'd asked the AI, okay, what's important
11:06
to you, he would have been able to
11:08
say, I don't want a fuss, I want
11:10
my daughter to look after me. But as
11:12
Dr. Earp points out, that all rests
11:14
on AI being proven accurate and being
11:17
trusted, and for that to happen, there
11:19
needs to be a level of transparency
11:21
around how the AI reaches the conclusions
11:23
that it does. Suppose the model says
11:26
it's likely maybe with 80% confidence
11:28
based on all the various factors
11:30
that I've considered that John would
11:32
want to have treatment withheld in
11:34
this situation and some of the
11:36
major evidence I'm using for this
11:38
claim is that John wrote an
11:40
email on January 22nd of 2015
11:42
explicitly saying to his friend Bob
11:44
that if he ever was unable
11:46
to feed himself, he would definitely
11:48
not want to live under those
11:50
conditions. Well, that would be like
11:52
explicit evidence that people could then
11:54
evaluate. They could say, oh, well, let's go check
11:56
with Bob and confirm that that's true. And oh, okay,
11:58
so that seems like a... pretty strong expression of
12:00
his values. I haven't ever heard him
12:03
say anything otherwise, so maybe we should go
12:05
with that. Whatever type of
12:07
evidence it raises is the sort of
12:09
thing that can and should then be evaluated
12:11
by people who know the person. It might
12:13
be that the person's spouse is there
12:15
and asked to make the decision, and the
12:18
model comes up with some reason, and
12:20
the spouse says, look, that's not anything that
12:22
actually reflects the John I know, and
12:24
then, okay, now you have a difficult decision
12:26
to make, and you have to do
12:28
some further investigation. I think most people recognize
12:30
this isn't ready for primetime quite yet.
12:32
So we're not saying that this could or
12:35
should be done, but it's interesting to
12:37
think about because that would be ideal, right?
12:39
If there was a future where as
12:41
a physician, I could sit down with the
12:43
family and say, hey, this is what
12:45
this algorithm suggests. Hopefully there is some transparency
12:47
there. How does that sit with you
12:49
as somebody who knew and loved this person?
12:52
And that can sort of be a
12:54
jumping off point. There's a long way to
12:56
go in proving AI decision makers are
12:58
accurate and finding out whether people would ever
13:00
actually trust them to help make life
13:02
or death decisions. Which is why I asked
13:04
Brian what he would do. I have
13:06
thought about this. I mean, a lot depends
13:09
on how predictively accurate these models turn
13:11
out to be once we've trained them and
13:13
tested them. And so right now I
13:15
sort of shrug my shoulders and say, well,
13:17
it kind of depends. If it turns
13:19
out that they're far more accurate on average
13:21
than human surrogates, then I'd be more
13:24
likely to want to have it used in
13:26
cases where I lost capacity. Whether I
13:28
would want to use it if a loved
13:30
one lost capacity and I was the
13:32
proxy decision maker, I suppose that my sense
13:34
of curiosity would be great enough that
13:36
I would be very interested in what the
13:38
prediction was. But then I would also
13:41
want to know that I could trust myself
13:43
enough to not be unduly swayed by
13:45
the prediction if I know that I have
13:47
good reason to trust my own judgments
13:49
about the person. Nicola looks back at her
13:51
time as her dad's carer and is
13:53
glad she could be his voice in his
13:55
final weeks. If it weren't for her,
13:58
it's possible Charlie would have died in hospital
14:00
rather than back at home. I felt
14:02
I knew what was important to my dad
14:04
and I feel like top of his
14:06
list was me to care for him and
14:08
my understanding was for that to be
14:10
in our home. He never specifically said that, but I still think about
14:12
him turning up to my front door. You know, I thought you'd look after me
14:15
when I was sick. And she only felt reassured about her decision when they arrived
14:17
back in Port Pirie. He had the biggest smile on his face as we wheeled
14:19
him in that back door, and it's something that I will always treasure, because I
14:21
just thought I know I made the right decision. That
14:24
was Nicola Champion finishing off that story
14:26
by Shelby Traynor and earlier you heard
14:28
from Dr. Brian Earp and Dr. Teva
14:31
Brender.