Episode Transcript
0:07
Welcome to the Skeptics Guide to
0:10
Emergency Medicine. Meet 'em, greet 'em, treat 'em
0:12
and street 'em. Today's date is April
0:14
15th, 2025, and I'm your skeptical host,
0:16
Ken Milne. The title
0:18
of today's podcast is Together
0:20
in Electric Dreams,
0:23
or is it reality? And
0:26
our guest skeptic is Dr.
0:28
Kirstie Challen. She is a
0:30
consultant in emergency medicine at
0:32
Lancashire Teaching
0:34
Hospital. Welcome back to the
0:36
SGEM, Kirstie. Hey,
0:39
Ken. I
0:41
look forward to mispronouncing that every single
0:43
time you're on, because I just can't
0:45
get my head around it. What is it?
0:49
Divided by a common language or
0:51
some such. Lancashire. Lancashire.
0:55
Not shire like a hobbit,
0:57
but Lancashire. Getting closer.
1:00
Moving on. I've
1:03
been waiting all month to be doing
1:05
this one because I'm just so excited
1:07
about artificial intelligence. So give us a
1:09
case. It
1:11
may be April, but as
1:13
you sit in your departmental meeting
1:16
with your emergency physician colleagues, you
1:18
will note that the winter surge
1:20
of patients doesn't seem to
1:22
have stopped and the decision fatigue
1:25
at the end of shifts
1:27
is as present as ever. Surely
1:29
AI will be making some of
1:31
these decisions better than us soon. That's
1:34
one of your colleagues only half
1:36
joking. Another colleague chips
1:38
in that the med students
1:40
at the nearby university have been
1:42
warned against using ChatGPT
1:44
to create differential diagnoses. And
1:46
you're left wondering whether AI
1:48
might be working in
1:51
the ED in the near future. It
1:53
can seem like a conveyor belt of
1:55
human misery sometimes in the emergency
1:57
department where you're going, when
2:00
will it end? But there are ebbs
2:02
and flows and the whole idea
2:04
of don't say the Q word gets
2:06
back to regression to the mean.
2:08
But we're not here to talk about
2:10
some of those fallacies. Let's talk
2:13
about emergency departments and how they can
2:15
be real high pressure environments. Clinical
2:17
decisions, boom, we need to make
2:19
them quickly and accurately, often
2:22
with incomplete information. Clinical
2:25
decision support or CDS
2:27
tools aim to address
2:29
this challenge by offering
2:32
real-time, evidence-informed recommendations
2:34
that could help clinicians
2:36
make better diagnostic, prognostic
2:38
and therapeutic decisions. And
2:41
CDS span a wide
2:43
spectrum from traditional paper-based
2:45
clinical decision rules or
2:47
tools to smartphone apps.
2:49
She's trying to trigger
2:51
me with the rules
2:53
there. Oh,
2:56
they can be on smartphone
2:58
or web-based apps like
3:00
MDCalc to more integrated
3:02
systems within electronic health records
3:04
or EHRs. These tools
3:06
function by combining patient
3:08
data with expert-driven algorithms
3:11
or guidelines to inform care
3:13
pathways. They can
3:15
help determine disease likelihood, risk
3:17
stratify patients, and even
3:19
guide resource utilization, such
3:21
as imaging or admission
3:23
decisions.
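To make that concrete: a knowledge-based CDS of the kind just described, fixed expert-derived thresholds applied to a patient's data to suggest a disposition, can be sketched in a few lines of Python. This is a minimal, hypothetical illustration; the variables, points, and cut-offs are invented for the example and are not a validated score or any tool discussed in this episode.

```python
# Minimal sketch of a knowledge-based CDS rule. All variables, points,
# and thresholds are hypothetical illustrations, not a validated score.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    heart_rate: int        # beats per minute
    systolic_bp: int       # mmHg
    troponin_elevated: bool

def admission_advice(p: Patient) -> str:
    """Apply fixed, expert-style thresholds and return a suggested disposition."""
    points = 0
    if p.age >= 65:
        points += 1
    if p.heart_rate > 110:
        points += 1
    if p.systolic_bp < 100:
        points += 2
    if p.troponin_elevated:
        points += 2
    if points >= 3:
        return "high risk: consider admission"
    if points >= 1:
        return "moderate risk: consider further workup"
    return "low risk: consider discharge with follow-up"

print(admission_advice(Patient(age=72, heart_rate=118, systolic_bp=95, troponin_elevated=False)))
```

The thresholds in a sketch like this come from the rule's authors and the literature rather than from data, which is the contrast with the AI-driven tools discussed next.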
3:26
And remember, they are called
3:28
guidelines, not
3:30
guidelines. Thou shalt. Recent
3:33
years have seen a growing
3:35
interest in applying artificial intelligence, AI,
3:38
particularly machine learning, to
3:40
clinical decision support. Unlike
3:43
traditional knowledge-based CDS,
3:45
which relies on literature-based
3:47
thresholds, AI-driven tools
3:49
derive patterns from large
3:52
data sets or, in
3:54
air quotes, "big data", to
3:57
identify associations and make
3:59
predictions. These non-knowledge
4:01
-based systems promise to augment
4:03
human decision-making by uncovering
4:05
insights that may have
4:07
been overlooked by clinicians or
4:09
the static rules. However,
4:13
the majority of
4:15
AI-based CDS
4:17
tools remain in
4:20
early development. Few have
4:22
been rigorously tested in the ED, and even
4:25
fewer have demonstrated improvements
4:27
in patient outcomes or clinician
4:29
workflow. Despite FDA
4:31
clearance for some tools,
4:34
evidence for real world impact remains
4:36
limited. And emergency
4:38
physicians are right to
4:40
approach this technology with
4:42
skeptical optimism. We'll
4:44
need to balance the
4:46
transformative potential of AI with
4:49
a critical eye toward
4:51
evidence, safety, and
4:53
usability. Clinical
4:55
workflow. It is very important,
4:57
but ultimately we want to see
4:59
the POO. We want to
5:01
see patient-oriented outcomes. So
5:03
what's the clinical question we're going to
5:05
be asking on today's SGEM? It's
5:09
a two-parter. One.
5:12
What is the current
5:14
landscape of AI CDS tools
5:16
for prognostic, diagnostic and
5:18
treatment decisions for individual
5:20
patients in the ED? And
5:23
Two. What
5:26
phase of development have
5:28
these AI CDS tools achieved? So
5:31
what's the reference for this episode? Kareemi
5:33
et al. Artificial
5:36
intelligence-based clinical decision
5:38
support in the emergency
5:40
department: A scoping review. And
5:43
that's in Academic Emergency
5:45
Medicine, April 2025. Yes,
5:48
that's right. It's another
5:50
hot off the press episode.
5:53
So let's run through the PICO.
5:55
What was the population? Studies
5:58
involving AI, that's
6:01
artificial intelligence or ML,
6:04
machine learning based clinical decision
6:06
support tools applied to
6:08
individual patient care in the
6:10
ED, published 2010 to
6:13
2023. And there were a number
6:15
of exclusions and we'll list those in the
6:17
show notes. How about the intervention or what were
6:19
they looking at? AI
6:21
or ML based clinical decision
6:23
support tools used during patient
6:25
care in the ED. And
6:28
did they have a comparison group? It's
6:30
not really applicable for a scoping
6:33
review. However, the
6:35
review identified whether the
6:37
studies involved any comparison
6:39
with usual care, clinician
6:41
judgment, or non-AI tools.
6:44
And what were the outcomes? So
6:47
the review didn't focus
6:49
on a single outcome, but
6:52
instead categorized studies by
6:54
their targeted clinical decision task,
6:56
diagnosis, prognosis, disposition, treatment,
6:58
etc. Outcomes
7:00
were only included if they
7:02
were relevant to emergency
7:05
clinicians' decision making, such as
7:07
predicting ICU admission, mortality,
7:09
or need for intervention. And
7:11
I know we've mentioned it a couple of
7:14
times already, but what type of study is
7:16
this? This is a scoping review. Yeah,
7:18
which is different than a
7:20
systematic review. Well,
7:24
it is also an SGEM
7:26
hot off the press, and we
7:28
were pleased to have the
7:30
first author on the show. He
7:33
is an emergency physician researcher
7:35
at Vancouver General Hospital who is
7:37
exploring ways to improve the
7:39
development and implementation of artificial intelligence
7:41
models in emergency medical care.
7:44
Welcome to the SGEM, Hashem. Hi,
7:47
thanks for having me on the show. Well
7:49
as you know, because we talked
7:51
prior to recording when I set this up
7:53
in the last week or so. You
7:56
know, I'm doing this DPhil
7:58
or PhD in artificial intelligence and evidence
8:00
based medicine, the intersection between those
8:02
two in an attempt to overcome my
8:04
natural stupidity, of course, but you
8:06
know, you look like a smart guy.
8:08
So what got you interested in
8:10
AI? Honestly,
8:12
it was my wife. She
8:17
was completing a master's
8:19
in finance and her main
8:21
project was around using
8:23
machine learning to predict stock
8:26
prices. So I
8:28
was hearing about the variety of
8:30
factors and the pure volume of
8:32
data they had to consider and
8:34
that made me think of our
8:36
work in emergency departments, you
8:38
know, our complex patients, all the variables
8:40
we're considering in our decisions. I
8:43
think also the fact that the
8:45
hospital I was training in at the
8:47
time was transitioning to an electronic
8:49
health record also just made clear to
8:51
me how much data we would
8:53
potentially be sitting on and so I
8:55
felt we should explore ways to
8:57
leverage that data to care for our
8:59
patients better and make our jobs
9:01
easier. I don't know about
9:03
you, Kirstie, but I love hearing these backstories about
9:05
how did you end up where you ended
9:07
up because it's not usually, hey, I've got this
9:10
master plan and I'm going to set up
9:12
all these steps and there I am. I
9:14
mean, the same reason I did
9:16
my MBA was because Barb suggested I
9:18
do my MBA because we really
9:20
couldn't communicate at the dinner table very
9:22
well because she's very very good
9:24
at finance and business and stuff like
9:26
that and I would nod appropriately. So
9:29
Hashim were you just nodding appropriately at the
9:31
dinner table as your wife talked about things and
9:33
you said maybe I should look into this
9:35
so I can contribute to the conversation. Yeah,
9:39
more or less. I still do
9:41
that a lot with anything else
9:43
finance related, but yeah, I'm trying.
9:45
Yeah, it's one of our physician
9:47
superpowers. It's the rare physician that
9:49
has a really good business acumen. All
9:51
right. Well, we're not doing a business
9:53
podcast. We're doing a nerdy podcast about the
9:56
medical literature. So, okay, Hashem, can you
9:58
give the actual conclusions that your authors came
10:00
up with? Yeah, so
10:02
our conclusion was that we
10:04
found a large number of
10:06
studies involving a variety of
10:08
clinical applications, patient
10:11
populations, and artificial
10:13
intelligence models. But
10:15
despite an increased rate of
10:17
publication in recent years, few
10:19
studies have advanced from
10:21
preclinical development to
10:23
later phases of clinical
10:25
evaluation and implementation. Alright,
10:28
so we're going to go through a
10:30
checklist that we derive from systematic reviews,
10:32
but we're going to use it for
10:34
this scoping review. Kirstie, what was the
10:37
main question being addressed? Was it addressed
10:39
clearly? Yes, it was.
10:42
Was the search for studies detailed and
10:44
exhaustive? Yes, they
10:46
used five databases and
10:48
searched the grey literature. You
10:50
know, whenever they say the gray
10:53
literature, I'm wondering if they just called
10:55
up a bunch of gray hairs
10:57
and no hairs and asked, what have
10:59
you looked into? Do you
11:01
have anything in your bottom drawer? But
11:03
really they mean things like asking
11:05
experts, which can have gray hair, but
11:07
sometimes not. And also looking into
11:09
conference publications and those types of things.
11:12
Were the criteria used to select
11:14
articles for inclusion appropriate? Yes,
11:16
they were. Do you think
11:18
the included studies were sufficiently valid for the type
11:20
of question that they were asked? Yes,
11:23
I do. Were the
11:25
results similar from study to study? They
11:28
were, although this is
11:30
slightly less relevant in a
11:32
scoping review that aims
11:34
to map where the literature
11:36
is. Yeah, they're sort
11:38
of spreading the net wide and
11:40
trying to capture as much as
11:42
they can within a scoping review.
11:44
So they're not looking for things
11:47
like heterogeneity and stuff. Were there
11:49
any financial conflicts of interest like
11:51
Hashem? Was he getting a whole
11:53
bunch of money from big AI? So
11:56
the project was funded
11:58
by the Canadian Institutes of
12:01
Health Research and several
12:03
of the authors have declared
12:05
their financial interests in
12:07
AI companies or research. But
12:10
as we always say,
12:12
conflicts of interest don't mean
12:14
the research shouldn't be
12:16
looked at and thought about;
12:18
you just need to know
12:20
about them. Yeah, no, transparency
12:23
is really really good and
12:25
having conflicts of interest doesn't
12:27
automatically negate any findings We
12:29
just need to put a
12:31
little bit more thought and
12:33
skepticism into evaluating those findings.
12:35
So what did they find?
12:37
Well, they found over
12:39
5,000 records were identified
12:42
electronically, and they drilled it down
12:44
to just over 600 studies that they could
12:46
include in the final analysis. So you can see
12:48
that there's a lot of studies going on
12:50
right now. Publication rates
12:52
have increased significantly from
12:54
2019. Many of
12:56
the studies, 40%, came from North
12:58
America, and of interest, less than
13:01
1% came from Africa. So
13:04
Kirstie, what was the key result? Despite a
13:07
rapidly increasing volume of studies
13:09
across the breadth of clinical
13:11
applications, few studies
13:13
describe advanced phases of
13:15
testing or implementation of
13:17
these clinical decision tools.
13:21
So why don't we alternate back
13:23
and forth about their four outcomes
13:25
here? I'll go first. The majority
13:27
of the data came from retrospective
13:29
studies and when I say majority,
13:31
I'm talking 79% were looking backwards.
13:35
The most common outcome
13:37
category was for prognosis
13:39
at 44.6%. There were
13:41
only a few high-quality
13:44
trials, with the only
13:46
randomized trials being protocols
13:48
and one quasi-experimental
13:50
study. There were no
13:52
published RCTs done in
13:55
a live ED setting. The
13:58
majority of studies were
14:00
in the preclinical in
14:02
silico phase. Under
14:04
3% had reached
14:06
clinical implementation or post-market
14:08
surveillance. Oh, we're going
14:10
to have to ask Hashem. What
14:12
does in silico mean? So
14:16
it's basically referring
14:18
to silicon computer
14:20
chips. So you're
14:22
just doing that
14:25
study using data.
14:27
As opposed
14:29
to in vivo, you're not
14:32
testing it in live patients.
14:34
So it's not in vivo, it's
14:37
not really in vitro unless of course
14:39
in vitro is considered you know some
14:41
simulation, like you're part of the Matrix.
14:43
Is that what in silico means, like
14:45
being part of the Matrix? In
14:49
a way, yeah. You're
14:51
just you're zeros and ones
14:53
you know it's just
14:55
pure data,
14:57
really. We can kind
14:59
of think of it as an in
15:01
vitro analog. But
15:04
yeah, it's just basically you're
15:06
running these models or developing
15:08
these models on data and
15:10
not testing it in any
15:12
kind of live clinical setting
15:14
at all.
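For readers who want to see what this preclinical, in silico phase looks like in practice, here is a minimal sketch, assuming synthetic data and a generic scikit-learn workflow; none of the features, outcomes, or numbers correspond to the studies in the review.

```python
# Minimal sketch of an "in silico" (offline, data-only) evaluation:
# fit a model on retrospective data and score it on a held-out split.
# All data here are synthetic; no live patients are involved.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 5))                    # stand-in for vitals / labs
logit = 1.5 * X[:, 0] - 1.0 * X[:, 2] - 2.0    # hidden "true" relationship
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # stand-in outcome, e.g. admission

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")              # discrimination on retrospective data only
```

A respectable held-out AUC in a sketch like this says nothing about whether the tool would help clinicians or patients in a live ED, which is exactly the point of the later phases of evaluation discussed in this episode.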
15:19
So I'm glad we got you in early
15:21
there, Hashem, because this is the part
15:23
of the program where we really love
15:25
having authors on board because we can
15:27
talk nerdy to you and find out
15:29
a little bit more about your research,
15:31
the depth and maybe stuff that ended
15:33
up on the editor's floor. So
15:36
let's go through five nerdy questions
15:38
and I will start with the
15:40
first one. This is
15:42
a different kind of nerdy. There's lots
15:44
of technical details in this field and
15:46
in your data in particular. Could
15:48
you start by helping us
15:50
with some definitions? Is
15:52
there a difference between
15:54
AI and machine learning?
15:56
And you also mentioned
15:58
supervised versus unsupervised machine
16:00
learning. So is somebody
16:02
watching the machines? Yeah,
16:06
it's a really good question.
16:08
There's a lot of variety in
16:10
the definitions. So I'll
16:12
just try to keep it very simple. Artificial
16:16
intelligence is an umbrella
16:18
term for the ability
16:20
of computers to replicate
16:22
how humans think or
16:24
behave. Machine
16:26
learning is one of
16:28
the processes through which
16:30
computers can achieve this
16:33
intelligence. So machine
16:35
learning is really
16:37
the algorithms that computers
16:39
can use to
16:41
identify patterns or make
16:43
predictions and some
16:45
would say without being
16:47
explicitly programmed so
16:49
they can kind of
16:51
learn on their
16:53
own. And that kind
16:55
of distinguishes it
16:57
from what I would
16:59
call traditional statistical
17:01
models, like most logistic
17:03
regression. That's kind of the
17:05
most common one we use in medicine. Although
17:07
many people would argue that
17:09
logistic regression is also a
17:12
type of machine learning. So
17:14
there's a bit of debate,
17:16
but in general terms, that's
17:18
what machine learning is. It's
17:20
kind of how artificial intelligence
17:22
can be achieved, one
17:24
of the ways. Now
17:26
supervised machine learning means
17:28
the model learns to
17:31
make predictions on data
17:33
that is labeled with
17:35
the outcome and that
17:37
labeling is done by
17:39
humans. So
17:42
a human may say
17:44
the outcome is x,
17:46
y, or z and
17:48
the model then tries
17:50
to predict or identify whether
17:52
it is truly x, y, or
17:55
z and it's told whether it's right
17:57
or wrong in that prediction so
17:59
that when it sees a new data
18:01
set it can get better at
18:03
making that prediction. So that's
18:05
supervised learning. Unsupervised
18:08
machine learning means the model
18:10
learns on unlabeled data. So
18:12
there's no x, y, or
18:14
z. It's not being
18:16
told whether it's right
18:18
or wrong in that prediction,
18:20
but it actually just
18:22
identifies its own patterns and
18:24
groups observations based on
18:26
certain characteristics that it is
18:28
finding itself in the
18:30
data. Most
18:33
machine learning models we're talking
18:35
about, especially in clinical decision
18:37
support, are supervised. But
18:39
there's some really fascinating work
18:41
being done with unsupervised models
18:43
that's coming through the pipeline.
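To make the distinction concrete, here is a minimal sketch in Python contrasting a supervised model, which learns from labeled outcomes, with an unsupervised one, which only groups similar rows. The data are synthetic and the setup is an assumption for illustration, not anything from the review.

```python
# Minimal sketch: supervised learning sees labeled outcomes (y),
# unsupervised learning gets no labels and finds its own groupings.
# Data are synthetic; any clinical meaning is illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Supervised: the model is shown the outcome for each row and is
# "told whether it's right or wrong" while it fits.
clf = LogisticRegression().fit(X, y)
print("Predicted outcome for the first row:", clf.predict(X[:1])[0])

# Unsupervised: no outcome labels at all; the model simply groups
# rows into two clusters by similarity.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster assigned to the first row:", clusters[0])
```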
18:46
I can think of one
18:48
unsupervised model, and that's
18:50
usually EM residents or sometimes
18:53
unsupervised models of training.
18:55
But they're not in silico.
18:57
They're in carbon. Nerdy
19:01
point number two is
19:03
on article screening. Your
19:06
initial search found over
19:08
5,000 records and you
19:10
screened the full text of
19:12
721. That
19:14
is impressive. Weren't
19:17
you tempted to use AI at any
19:19
point to help with that? I
19:23
was and I'm sure
19:25
my co-reviewers were even
19:27
more so, but I'm very
19:29
grateful for their hard
19:31
work on screening all of
19:33
those articles. Frankly,
19:35
I have limited experience
19:37
with AI research tools
19:39
and I haven't established
19:41
trust with them just
19:43
yet. I'm trying
19:45
them out with varying
19:48
degrees of success. And
19:50
in fact, actually one
19:52
of my co-authors on
19:55
this paper recently developed an
19:57
AI model to streamline
19:59
screening for systematic reviews. And
20:02
he's recently published this
20:04
in, I believe, Annals of
20:06
Medicine. It's fantastic work
20:08
and these models and these
20:10
tools are coming through the
20:12
pipeline. But this
20:14
scoping review was from my
20:16
master's thesis and I really
20:18
just wanted to keep it
20:20
as simple as possible and
20:22
Sometimes simple is more work,
20:24
but at least it's safe.
20:26
So that's why we chose
20:29
to do this all by
20:31
hand. The third nerdy point
20:33
is about Anglo-centricity. You
20:35
excluded papers which didn't have
20:37
either the full text or an
20:39
abstract in English. We get
20:41
this from Figure 1. And
20:43
this was six papers in the end, so
20:45
under 1% of the papers. And
20:48
despite this, most of the literature came from
20:50
North America, Asia, and Europe. Arguably,
20:52
AI could potentially be more
20:54
beneficial in parts of
20:56
the world like Africa,
20:58
where healthcare resources are more
21:00
stretched. But these aren't
21:03
represented in your data set. Did you
21:05
get a feel from your review of
21:07
whether this might happen or why it's
21:09
happening or isn't happening? Yeah,
21:12
it's a valid critique. I
21:15
can't say the review
21:18
itself will speak directly
21:20
to this, but personally,
21:22
I think there is a
21:24
real risk that AI could
21:26
widen the socioeconomic gaps
21:28
that are prevalent not just
21:30
on a global level but you
21:32
know right here at home. If
21:35
your hospital is not
21:37
using an electronic health record,
21:40
the data and therefore
21:42
the effectiveness of any potential
21:44
AI solution is extremely
21:46
limited. So you know
21:48
obviously there are many
21:50
other priorities for underserved populations
21:53
but as a profession
21:55
and especially as researchers I
21:57
think we need to take
21:59
steps to mitigate worsening the disparities
22:01
that already exist. And
22:04
perhaps AI can be a
22:06
tool used for good, but
22:08
those wielding it need to
22:10
acknowledge that responsibility. So
22:13
while our review
22:15
didn't really dig
22:17
into those disparities and,
22:19
you know, regional ones
22:21
in Africa, for example,
22:24
I think certainly that
22:26
that gap does exist
22:28
and hopefully there's a
22:30
way to address that
22:32
in the near future. Yeah,
22:35
one of the things we
22:37
recognize in this space is
22:39
that biases can be amplified
22:41
using artificial intelligence and we'd
22:44
like to use artificial intelligence
22:46
as a tool ethically and
22:48
responsibly, and not contribute to,
22:50
magnify, or make biases worse.
22:53
I would rather it be used
22:55
for great good and hopefully break
22:57
down some of those barriers and
22:59
see that everybody can contribute to
23:01
this area of research. Yeah,
23:04
absolutely. And the counterpoint
23:06
that I've heard from
23:08
many AI researchers is
23:10
that, well, these biases
23:12
exist already. And AI
23:14
can actually shine light
23:16
on these biases and
23:18
help us mitigate them.
23:21
There's tools, for example, that
23:24
can help with
23:26
translation for patients coming
23:28
to triage. There
23:31
are tools that show
23:33
that patients who don't speak
23:35
the native language,
23:37
often English, are not getting
23:39
as good prediction accuracy with
23:41
some of the tools that we're
23:43
using. So it helps us
23:45
kind of shine a light into
23:47
these issues and these gaps. And
23:51
then we have to kind of reckon
23:53
with how do we solve that gap?
23:55
The AI tool may not be the
23:57
solution to it, but it may expose
23:59
the gap. And then it's on us
24:01
to figure out ways to stop that
24:03
gap. Like
24:05
most things, it's a tool and
24:07
it's how we use the tool that's
24:09
more important than the tool itself
24:12
necessarily. Agreed.
24:15
Which brings us neatly on
24:17
to nerdy point number
24:19
four, which is about outcomes.
24:22
You found that the
24:25
largest group of tools,
24:27
270 out of 606,
24:29
used the AI CDS to
24:32
inform prognosis. And
24:35
as a clinician, this makes
24:37
me wonder, well, so what?
24:40
Okay, knowing the expected clinical course
24:43
of a patient can be
24:45
useful, but I'm not sure I
24:47
need AI to tell me
24:49
that the frail 102-year-old
24:51
with kidney failure is unlikely
24:53
to do very well. And
24:56
I have limited scope to
24:58
change that. Did
25:01
you find there was
25:03
any exploration of what the
25:05
AICDS tools added from
25:08
the patient's point of view.
25:11
So given the breadth and
25:13
volume of studies we found,
25:16
we weren't really able to
25:18
delve into whether and to
25:20
what extent patient perspective was
25:22
a part of the tool
25:24
development. But it certainly
25:26
needs to be, as
25:29
with any intervention or therapy
25:31
in medicine, or emergency
25:33
medicine specifically. I
25:35
think the point you raise about this
25:37
100 year old with kidney failure
25:39
is actually even more relevant to have
25:41
the clinicians point of view. How
25:44
is this tool going to help
25:46
me in my work? Where does it
25:48
fit into my clinical workflow? Clinicians
25:50
need to be a part
25:52
of defining the problem. And unfortunately
25:55
this just isn't always the
25:57
case with these AI tools. And
26:00
so we end up with
26:02
a lot of irrelevant studies and
26:04
research waste. And I
26:06
wonder if this is behind
26:08
the main finding of our study
26:10
that so many of these
26:12
models are being developed because data
26:15
is available and it's relatively
26:17
easy to create a model. Get
26:19
a data scientist or a
26:21
computer scientist and build
26:23
a nice big model that's
26:25
really accurate. What's not so easy
26:28
is identifying a really important
26:30
clinical problem for which that data
26:32
is available, and from
26:34
which you can create a
26:36
timely, effective solution. So
26:38
if a model can tell me
26:40
that, hey, you know, of the
26:42
30 CTAS-3 patients that are
26:45
sitting in your waiting room right
26:47
now, they've been sitting there for
26:49
10, 12 hours, this one
26:51
is going to end up in the
26:53
ICU within 12 hours. Well,
26:55
I think that's actually a
26:57
very useful model, but you need
26:59
those key stakeholders. You need
27:01
physicians, nurses, patients to help identify
27:04
what the problem is and
27:06
what the potential solution can be.
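A hypothetical sketch of the workflow Hashem describes might look like the following: rank the patients currently waiting by a model's predicted probability of ICU admission and flag the top few for earlier reassessment. The model, features, and data here are invented for illustration and stand in for a tool that would have to be properly developed and validated first.

```python
# Hypothetical sketch: use a prognostic model's predicted probabilities to
# prioritize a waiting room. Model, features, and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X_hist = rng.normal(size=(1000, 3))                        # synthetic historical ED visits
y_hist = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X_hist[:, 0] - 1.5))))  # synthetic ICU outcome
model = LogisticRegression().fit(X_hist, y_hist)           # stand-in for a validated tool

waiting_room = rng.normal(size=(30, 3))                    # 30 patients currently waiting
risk = model.predict_proba(waiting_room)[:, 1]
order = np.argsort(risk)[::-1]                             # highest predicted risk first
for rank, idx in enumerate(order[:3], start=1):
    print(f"Reassess patient {idx}: rank {rank}, predicted ICU risk {risk[idx]:.0%}")
```

As the discussion goes on to say, the harder question is whether acting on a ranking like this actually changes what happens to those patients.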
27:09
I'm with you on that one. Personally,
27:13
I'd develop it even further, and
27:15
I'd want my model to tell
27:17
me of those 30 CTAS patients
27:19
in the waiting room, which one
27:21
can I stop from going to the
27:23
ICU if I do something about
27:25
it now instead of in two
27:27
hours? And I
27:29
don't think we're quite there yet unless you
27:32
know something about the literature that I don't.
27:35
Yeah, that's a difficult question
27:37
because I think there's
27:39
so much variety in what
27:41
the interventions may be
27:43
for that potentially critically ill
27:45
patient. But
27:47
it's a really good point. I think at
27:49
least if we can get the model
27:51
to identify those high risk patients and
27:54
then we can kind of focus
27:56
our attention on them a little bit
27:58
more. Say, hey, this patient's really high
28:00
risk. I'm sure once we identify that
28:02
patient, something will come up. It's like,
28:04
oh, their heart rate's 125. Oh,
28:07
their blood pressure is actually
28:09
like 100 systolic and they're on
28:11
antihypertensives. And so we'll probably
28:14
clinically identify something that the model
28:16
cannot specifically tell us, but
28:18
at least it's shown some light
28:20
on that patient. I
28:22
also think that the
28:24
reverse is very true that
28:26
not just identifying super
28:28
high-risk patients, but a
28:31
model that can validly and
28:33
accurately identify patients that
28:35
are very low risk that
28:37
we don't need to
28:39
put extra resources towards in
28:41
the the next few
28:43
hours would be really helpful,
28:45
especially as emergency departments
28:48
are overcrowded and we have
28:50
staff shortages and resource
28:52
shortages. So knowing how to
28:54
better utilize what we
28:56
have is, I think, a
28:58
major potential boon for
29:00
this kind of technology. I
29:03
am an AI optimist, skeptic
29:06
but optimist and I'm glad you
29:08
brought up the idea of
29:10
hospital crowding because you know you're
29:12
putting all of these resources
29:14
into trying to find the signal
29:16
in the noise who's really
29:18
sick and who's really not sick
29:20
and separate them all out
29:22
and Boy, wouldn't it just be
29:24
better if we had enough
29:26
staffed inpatient beds and were adequately
29:28
staffed in the emergency department
29:30
with natural things rather than artificial
29:32
things? And yet, how
29:34
much is it going to cost to
29:36
do all of this artificial intelligence
29:38
stuff? Which, you know, I'm obviously a
29:40
proponent of. But what about
29:43
the human factor? Why can't we just have
29:45
enough staff? Can't we just hire a couple of
29:47
more nurses? And then
29:49
we don't have to sit around
29:51
going, hmm, I wonder if that
29:53
person's sick or that person's not
29:55
sick. We'd actually have one of
29:57
the best diagnostic tests known to
29:59
medicine. And that is called your
30:01
retina. Look at the patient. But
30:04
anyways, I'll get off that one because it's
30:06
a segue into number five. And
30:08
that's where most of this stuff is
30:10
in the developmental phase. And you
30:12
highlight that the literature is coming from
30:14
these early phases of artificial intelligence
30:17
and clinical decision support tools. You
30:19
know, it's looking at preclinical stuff. It's
30:21
looking at offline validation, which is I
30:23
assume the in silico thing, which I'm loving
30:25
that new term. Thank you very much.
30:28
But few tools have undergone
30:30
these large-scale safety and
30:32
effectiveness trials or even
30:34
post-marketing surveillance. So
30:37
we like shiny new things.
30:40
So how do you think regulators, administrators,
30:43
and the EM community in
30:45
general should use this information
30:47
from your scoping review because
30:49
it's like, oh, that's so nice
30:51
and shiny. Me want. Yeah.
30:56
I think they should
30:58
be excited by the potential
31:00
but disappointed by the
31:03
lack of realization. Some
31:05
of these tools have been around
31:07
for over a decade and yet we
31:09
haven't found a way to test
31:11
them or implement them effectively. I
31:14
think that points to a
31:16
need for more research, more
31:18
knowledge translation and more innovation.
31:21
I think that's going to all
31:23
require more funding and better
31:26
use of that funding. It's
31:28
not enough to create that big,
31:30
shiny, highly accurate model if it
31:32
doesn't stand a chance of being
31:34
useful to clinicians or patients. And
31:37
as a researcher, I think we
31:39
need to be more creative. I
31:42
think we need to interface with
31:44
fields like implementation
31:46
science, human factors,
31:48
engineers, healthcare and
31:50
industrial design. We
31:52
need more qualitative research and we
31:54
need more patient and community engagement. Clearly
31:56
the same old approach is not
31:59
working, so I think it's time to
32:01
think outside the box a little
32:03
bit. And most of
32:05
all, I hope the emergency
32:07
clinical community stays cautiously optimistic.
32:10
about AI and how it can
32:12
enhance our work and help us
32:14
take better care of our patients
32:16
and ourselves. But yeah, I think
32:18
there is still a lot of work to be done in
32:20
this field. Hashem, that
32:22
was a great answer because I love
32:24
that you brought it back to patient
32:26
care and that they should be involved
32:28
in the development, participate in the research,
32:31
talk about their preferences and their values
32:33
and really focus on what's important to
32:35
them. And I really like the fact
32:37
that you brought it back to
32:39
patients because it starts with patient care and
32:41
it ends with patient care and that's why we're
32:43
there. 100%.
32:46
I agree.
32:49
So those were our five nerdy questions,
32:51
but what did we forget to
32:53
ask? Is there anything else you want
32:55
to highlight from your study or
32:57
this area of research? No,
33:00
I think that's great. I just
33:02
appreciate the chance to talk about
33:04
it. Am I allowed to do
33:06
a little shout out here? Oh,
33:09
absolutely. Shout out. Okay,
33:12
awesome. So at
33:14
the Canadian Association of
33:16
Emergency Physicians, we have
33:18
an AI special interest
33:20
group. And we're
33:22
actually going to be hosting a
33:24
couple of events at ICEM 2025
33:27
in Montreal this year. So
33:29
it's pretty exciting
33:31
stuff. So we have a
33:33
panel discussion with several
33:35
international AI experts. And
33:38
we also are doing an
33:40
AI networking event over some cocktails.
33:43
So if any of the listeners
33:45
are interested and want to
33:47
join, please feel free to reach
33:50
out to myself or the
33:52
event organizers. I think we're planning
33:54
to send out a communication as well
33:56
in the coming weeks about when those
33:58
events are going to be. And
34:00
tons of our special interest
34:02
group members are going to be
34:04
doing their own presentations on the
34:07
topics that they've been researching. So
34:09
yeah, there's lots of good AI stuff
34:11
at ICEM this year. So please come
34:13
on out. Hashem, is
34:16
this only open to the
34:18
SGEM listeners or could maybe one
34:20
of the SGEM hosts put
34:22
their name forward to be part
34:24
of the interest group? Yes,
34:27
as long as you're
34:29
a member of CAEP right
34:31
now, you are welcome
34:33
to be a member of
34:35
our AI Special Interest
34:38
Group and we're under the
34:40
umbrella of the Digital
34:42
Emergency Medicine Committee. So
34:44
you can join that and then
34:46
you can join AI Special Interest Group
34:48
from there. Alright,
34:52
well, those are our nerdy questions and
34:54
thanks for giving us a little extra there.
34:57
It's time to comment on the authors'
34:59
conclusions and compare them to the SGEM's
35:01
conclusions. Straightforward,
35:03
we agree with the authors'
35:05
conclusions. Well, how about
35:07
the bottom line? Is it straightforward as well? Yep,
35:11
artificial intelligence-based clinical decision
35:13
support tools in the ED
35:15
show promise, but...
35:17
we need rigorous
35:20
evaluation before they're routinely
35:22
implemented. And how
35:24
about resolving that case that you started?
35:27
You agree with your colleagues that
35:29
this is a rapidly expanding
35:31
field, but your jobs are
35:33
probably safe for another few
35:35
years. And how are
35:37
you going to apply this information clinically? We
35:41
are going to ask what in
35:43
the UK we call the Chief
35:45
Clinical Information Officer to
35:48
attend the next staff
35:50
meeting to discuss the potential
35:52
benefits and the potential
35:54
harms of AI CDS in the
35:57
ED. And on this
35:59
episode, we don't have what would you tell the
36:01
patient because we don't have a patient involved
36:03
in this case. So we'll skip ahead to the
36:05
Keener contest. And
36:09
it's been a few
36:11
weeks. So the last episode's
36:13
winner was our good
36:15
friend from all the way
36:17
over in New Zealand, Stephen
36:19
Stelz. He knew sudden
36:21
onset shortness of breath or
36:24
dyspnea is the most common
36:26
presenting symptom for pulmonary embolism.
36:29
Kirstie, I know you got a hard one this
36:31
week. What's the question? Where
36:35
was the first
36:37
computer that could store
36:39
a program in
36:41
its electronic memory? Well,
36:44
if you know the answer to this
36:46
week's question, then send an email to
36:48
TheSGEM@gmail.com with Keener in
36:50
the subject line. The first correct answer
36:52
will receive a shout out on the
36:54
next episode. And
36:56
now it is your
36:58
turn, SGMers. What
37:01
do you think of this
37:03
episode on artificial intelligence? On X,
37:05
previously Twitter, tweet your
37:08
comments, or on Bluesky,
37:10
using the
37:12
hashtag #SGEMHOP. What
37:15
questions do you
37:17
have for Hashem and
37:20
his team? Ask
37:22
them on the SGEM blog and
37:24
the best social media feedback will
37:26
be published in Academic Emergency Medicine.
37:29
Or you could ask ChatGPT for
37:31
what you think about this
37:33
episode or what questions you want
37:35
or any other AI model
37:37
that you want to use and
37:39
then you can send those
37:41
questions to Hashem. Even those might
37:43
get published in Academic Emergency
37:46
Medicine. Well, thank you,
37:48
Kirstie, for doing another SGEM HOP with me.
37:50
It's been a blast as ever. And
37:53
Hashem, I've learned some stuff today. I
37:55
like this in silico stuff. Yeah,
37:57
you've taught me a few things about artificial
37:59
intelligence. And the fact that CAEP has
38:01
this, you know, a special interest group.
38:03
I'm, as soon as we hang up,
38:05
I'm putting in my application. But I really
38:08
appreciate you coming and sharing your master's
38:10
thesis and the publication in AEM. Thank you
38:12
so much for having me, it's been
38:14
a pleasure. And if you
38:16
could give the SGEM tagline in
38:18
your best robotic voice. Remember
38:21
to be skeptical of
38:23
anything you learn even
38:25
if you heard it
38:27
on the skeptics guide
38:29
to emergency medicine. That
38:32
was really nice. Talk to
38:34
everyone next week.