Episode Transcript
0:00
Forget frequently asked questions. Common sense.
0:02
Common knowledge. Or Google. How about
0:04
advice from a real genius? 95%
0:06
of people in any profession are
0:08
good enough to be qualified and
0:10
licensed. 5% go above and
0:12
beyond. They become very good at
0:14
what they do. But only 0.1%
0:16
are real geniuses. Richard Jacobs
0:18
has made it his life's mission
0:20
to find them for you. He
0:22
hunts down and interviews geniuses in
0:24
every field. Sleep science, cancer, stem
0:26
cells, ketogenic diets, and more. Here
0:29
come the geniuses. This is the
0:31
Finding Genius podcast. Hello,
0:38
this is Richard Jacobs with the Finding
0:40
Genius podcast, now part of the
0:42
Finding Genius Foundation. My guest today is
0:44
Maria Gritscher. We're going to talk about
0:46
machine vision AI applications. Maria is a
0:48
seasoned executive with over 18 years
0:50
of experience in AI-driven technology and
0:52
applications. So welcome, Maria. Thanks for
0:55
coming. Thank you. Pleasure to be here. Yeah.
0:57
Tell me a bit about your background
0:59
with machine vision AI, like, you know, what
1:01
are some really interesting things you've seen
1:03
develop over the past, you know, 18 years?
1:05
Okay, well, of course. So let me start
1:07
a little bit with my background. So
1:09
I've been, I've been working with startups all
1:11
my life. That's the only thing I've been
1:13
doing so far, a lot of startups. So really,
1:16
it has been very interesting to see
1:18
the development of the technology and like what
1:20
we were busy with, like 15 years ago,
1:22
10 years ago, five years ago, and what
1:24
we're busy with right now. There's sort
1:26
of so much to that. The
1:34
more we live, the faster things change.
1:36
That has been true so far, at least
1:38
from what I have seen. Please
1:41
keep going with your background. What are some of
1:43
the really, I guess, interesting things you worked on,
1:45
and then we'll get to what you're working on
1:47
today. So there are multiple medical
1:49
advancements that are, well, medical
1:52
health has been an important
1:54
field in our lives so
1:56
far. And there are many,
1:58
many different verticals in medical AI
2:00
specifically that are being developed, like
2:02
medical AI or medical technology. My
2:04
favorite, I would say, my favorite
2:07
applications, and also what I
2:09
specialize in, are visual AI applications,
2:11
and what is being developed right
2:13
now in the medical domain is
2:15
absolutely amazing and I would like
2:17
to share it in a little
2:19
bit more detail. But before we
2:21
dive into this, I just want
2:23
to do like a little comparison
2:26
to how things have been done
2:28
so far in our modern world
2:30
and how they are starting to
2:32
be done and what's actually the
2:34
difference that AI, medical AI brings,
2:36
like what makes it so unique,
2:38
what makes it so important and
2:40
incredible. Let's step in. What are
2:43
you working on right now? Okay.
2:45
So well, right now, multiple medical
2:47
AI applications. For example, for
2:49
example, if you look at the
2:51
CT scans, CT scans being taken
2:53
at the hospital, or X-ray
2:55
or MRI scans like everything that's
2:57
visual so those visual images so
2:59
far needed an expert, a radiologist,
3:01
cardiologist, or whoever the specialist is.
3:03
They needed an expert to look
3:05
at them and to understand what's
3:07
going on and basically determine that
3:09
pathology, determine like what's, if there's
3:11
something wrong or if everything is
3:13
okay on this image. Right now,
3:15
all this is being replaced by
3:17
AI. And basically, that's kind of
3:19
leading into what we do. At Keymaker,
3:21
we are a building block in
3:23
creating those types of applications. We
3:25
provide training data. We train those
3:27
models, we create custom training data
3:29
for those models, but what I
3:31
would like to elaborate on, to
3:33
kind of talk a little bit
3:36
more about, is how this actually,
3:38
like how different it is from
3:40
what we knew and we've been
3:42
doing so far. So what's going
3:44
on right now. So for example,
3:46
if we take, if we take
3:48
traditional medicine, how it's been working
3:50
so far. So if we have
3:52
a doctor, for example, doesn't matter,
3:54
like any specialist and the specialist,
3:56
even if it's like the best
3:58
specialist in the world, it's one
4:00
person. He has a certain
4:02
number of patients he's seen so
4:04
far. He has a certain
4:06
number of years of education, things
4:08
he experienced in his life.
4:10
So it's good. So let's say
4:12
when we go and see
4:14
the specialist, we get pretty good
4:16
expertise of one person. Now,
4:18
if the same diagnosis, same expertise
4:20
is given by AI, let's
4:22
say on the same scan, for
4:24
the same image here, instead
4:27
of one person, even the best
4:29
person in the world, we
4:31
tap into the intelligence and knowledge
4:33
and expertise of the hundreds
4:35
of thousands of different experts all
4:37
put together. So it takes
4:39
the whole medical examinations and understanding
4:41
of the condition to a
4:43
completely different level. So you're not
4:45
just working with the best
4:47
expert in the world. How much data,
4:52
let's say, to identify lung cancer?
4:54
How many scans? So yes, yeah, I understand
4:56
what you mean. So it really
4:58
depends on the model. Usually there
5:00
are different models that try to
5:02
do the same, different companies that
5:04
develop those models. So it takes
5:06
like thousands of images that are
5:08
annotated, multiple specialists to train a
5:10
model properly. Now mind you, this
5:12
is not a process of, let's just
5:14
do it one time and that's
5:16
it. No, it's an ongoing process. It's
5:18
constantly developing, which means that like
5:20
a doctor has to go to courses
5:22
and go to seminars and read
5:24
papers to constantly keep his knowledge
5:26
up to date. Same with AI.
5:29
It has to keep learning. Only here,
5:31
we're learning not from one
5:33
person, we're learning from thousands of experts.
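[To make the training process described here concrete, below is a minimal sketch of fine-tuning a pretrained vision model on expert-annotated scans. The folder layout, labels, and hyperparameters are illustrative assumptions, not Keymaker's actual pipeline.]

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Expert annotations encoded as class-labeled folders, e.g. scans/tumor/
# and scans/healthy/ (a hypothetical layout).
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # scans are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("scans/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Start from pretrained weights and swap in a head for our classes.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):  # in practice, retraining is ongoing, not one-time
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```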
5:35
So what kind of things
5:37
are you seeing? What are some
5:39
examples? So for example, brain tumors,
5:41
identification of brain tumors early, identification of
5:43
brain tumors that is done immediately,
5:45
done with very, very high precision.
5:47
Another, actually, I would say one of
5:49
my favorite examples is ultrasound and
5:51
ultrasound in emergency rooms. And what's
5:53
happening here, the response. The
5:55
velocity of the response is very
5:57
important. So identifying if there's reflux
5:59
or any injury on ultrasound and identifying
6:02
it fast, is super important to save
6:04
lives. So when we have a person
6:06
doing it, there's a delay in
6:08
the response. It also depends on the
6:10
availability of the person. But when
6:12
we have AI doing it, right
6:14
now it's starting to be implemented in
6:17
multiple emergency rooms. It's immediate and
6:19
it's way, way more precise. So here
6:21
we're actually seeing a significant impact, a
6:23
measurable impact from AI systems. So I
6:25
mean, what is AI looking for?
6:27
So what is its diagnosis rate compared
6:30
to a doctor? So when you're looking
6:32
at an ultrasound, at that ultrasound, for
6:34
example, there's like a lot of
6:36
things going on. So the AI
6:38
would recognize the same reflux, the same
6:40
internal bleeding as part, let's say
6:42
part of an ultrasound, it would know
6:45
the area right away. So it
6:47
would know to recognize that this
6:49
is internal bleeding with way higher precision
6:51
and accuracy, and way faster than a person.
6:53
Well, what is the lag or the latency there?
6:55
What is the number? Well, I
6:57
cannot tell you exactly the numbers, but
7:00
like AI, it's immediate. It
7:02
looks at the ultrasound and it's there. Well,
7:04
for the person, it may take
7:06
a few minutes till they see it
7:08
there. And also, they might miss
7:10
it, because there's always... Well, what's
7:12
the background efficacy rate? What is the
7:15
rate with the AI of detection?
7:17
Like the false positives or false negatives?
7:19
I cannot tell you that exactly, but
7:22
because we develop the system, we
7:24
don't implement it, but it's significant enough
7:26
that it's proven that AI works
7:28
better. Okay. So why all of a
7:30
sudden, over the past couple of
7:32
years, does AI seem to be a
7:34
lot more advanced? What's happened in
7:36
the field where now you get things
7:39
like ChatGPT and reasoning models and
7:41
all that? Why is AI... So
7:43
ChatGPT, I would say that's
7:45
not part of my domain, so I
7:47
will not be able to answer
7:49
that properly, but in terms of visual
7:52
AI, like with... With every technology,
7:54
we hit a point, in a way,
7:56
like a tipping point where we
7:58
know how to train models. We have
8:00
enough data to train models, and
8:02
it just works faster. It just takes
8:04
more training data, better performing models,
8:06
and it just works. That's technology. Are
8:08
you seeing that in your machine
8:10
learning, or what are you seeing? Yeah,
8:12
well, the projects that we are working
8:14
with are machine vision AI only, so
8:16
it's only visual projects. We see
8:18
it a lot with visual projects. Like
8:20
before, well, also when you develop
8:23
a model, regardless of what the model is,
8:25
you develop it once and then
8:27
you retrain it. So the older the
8:29
model is, the more training the model
8:31
went through, the better it gets. So
8:33
we already have like a few
8:35
years of training models and creating models.
8:37
So the models just get smarter.
8:39
It's like a brain, like human brain
8:41
that learns more and more with
8:43
time and doesn't forget. It does not
8:45
forget what happens before. It just
8:47
gets better. What if there's a drift in the data going on,
8:52
can you tell if there is
8:54
such a thing? What do you mean?
8:56
A drift in the data. I
8:58
don't know, maybe for some reason, you
9:01
know, internal bleeding around the stomach
9:03
now is showing up differently than it
9:05
did years ago. Maybe because people's
9:07
health has changed on average. What
9:09
if there's a drift in the
9:11
data? So this is a very,
9:13
very interesting question. This is what we
9:15
call it bias in the data. It
9:18
happens. There's always a bias. And
9:20
that's why we always need human input
9:22
and we always need to retrain
9:24
those models and validate that they're working
9:26
right. So that's why we
9:28
still have human input and it's not
9:30
just 100% automated, and we haven't stopped
9:33
training right now.
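[One simple way to notice the kind of drift discussed here, sketched below: compare the model's recent positive-finding rate against a historical baseline and flag large deviations for expert review. The class, window size, and tolerance are assumptions for illustration.]

```python
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_rate: float, window: int = 500, tolerance: float = 0.05):
        self.baseline_rate = baseline_rate  # positive rate seen at validation time
        self.recent = deque(maxlen=window)  # rolling window of recent predictions
        self.tolerance = tolerance          # allowed deviation before flagging

    def observe(self, prediction_is_positive: bool) -> bool:
        """Record one prediction; return True if drift is suspected."""
        self.recent.append(prediction_is_positive)
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough data yet
        recent_rate = sum(self.recent) / len(self.recent)
        return abs(recent_rate - self.baseline_rate) > self.tolerance

monitor = DriftMonitor(baseline_rate=0.12)
# If monitor.observe(pred) ever returns True, route recent scans to experts.
```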
9:35
How do you identify bias? What's an example of some
9:37
bias that you've seen in your
9:39
models? We'd have to figure out where
9:41
it came from. For example, let's
9:43
take a simple example. Let's look at
9:45
blood cancer. You have the cancerous
9:48
cells and sometimes regular cells. The model
9:50
can recognize regular cells as cancerous
9:52
cells by mistake. This can happen. Here
9:54
it's very important that we still
9:56
have expert input and we do validate
9:58
the model and make sure that
10:00
those false positives are marked as such,
10:02
and then we retrain the model
10:05
on eliminating those false positives.
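[The correction loop described above, as a small sketch: predictions that experts overturn are relabeled and folded back into the training set. The record format and names are hypothetical.]

```python
def build_retraining_set(predictions: dict, expert_labels: dict) -> list:
    """Collect cases where the expert's label overturns the model's."""
    corrections = []
    for case_id, model_label in predictions.items():
        true_label = expert_labels.get(case_id)
        if true_label is not None and true_label != model_label:
            # e.g. the model said "cancerous" but the expert said "regular"
            corrections.append({"case": case_id, "label": true_label})
    return corrections

predictions = {"cell_001": "cancerous", "cell_002": "cancerous"}
expert_labels = {"cell_001": "regular", "cell_002": "cancerous"}
print(build_retraining_set(predictions, expert_labels))
# [{'case': 'cell_001', 'label': 'regular'}] -> goes back into training
```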
10:29
Please visit findinggeniuspodcast.com and click on
10:31
support us. We have three levels
10:33
of membership from $10 to $49
10:35
a month, including perks such as
10:37
the ability to see ahead in
10:40
our interview calendar and ask questions
10:42
of upcoming guests, transcripts of
10:44
podcasts you're interested in, the ability
10:46
to request specific topics or guests,
10:48
and more. Visit findinggeniuspodcast.com
10:50
and click support us today. Now
10:52
back to the show. Okay, I
10:54
mean, how many of the scans
10:56
do you look at manually to
10:58
see if it's a false positive?
11:00
Is it every 20th one, or is
11:02
it only? No, no, no, no,
11:04
it's usually, there is an ongoing process
11:06
of validation going on. So the
11:08
experts, the data science
11:10
team usually always looks at the
11:12
outputs of the models or certain
11:14
subsets of outputs of the models
11:16
and validates whether they're correct. What
11:20
percentage of scans, for instance, for internal
11:22
bleeding, are looked at manually? Statistically, what would
11:24
be enough? 1%, 10%? So it would
11:26
depend on the model and it would
11:28
depend on specific use case, but it can
11:30
be anything from 10% to, if the
11:33
model is performing well, it can be
11:35
less than 1%.
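[A sketch of the sampling policy just described: review a larger share of a new model's outputs and taper off as it proves itself. The 10% and 1% figures are Maria's; the mapping from accuracy to review rate is an assumption.]

```python
import random

def review_rate(recent_accuracy: float) -> float:
    """Fraction of model outputs to queue for manual expert review."""
    if recent_accuracy < 0.95:
        return 0.10   # newer or weaker model: review around 10%
    if recent_accuracy < 0.99:
        return 0.05
    return 0.01       # well-performing model: around 1% or less

def needs_manual_review(recent_accuracy: float) -> bool:
    return random.random() < review_rate(recent_accuracy)

queued = sum(needs_manual_review(0.99) for _ in range(10_000))
print(f"queued for expert review: {queued} of 10000")  # roughly 100
```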
11:37
Okay. So again, you've gotten it to a very high efficacy. Do
11:39
you see it getting better and better still
11:41
or? It's always going to get better
11:43
and better. It is. It's always going to
11:45
get better. I think I believe so.
11:48
I believe so, at least so far, because...
11:50
Think of it, there are always edge
11:52
cases, there are always new things that
11:54
are coming up. There's also
11:56
overtraining. I mean, there's getting stuck in
11:58
a localized minimum or maximum. If you have
12:00
a spike, it tends to dominate what's seen
12:02
in the visual field there. I mean,
12:04
again, I know there's overtraining. So how do
12:06
you make sure the model doesn't wander
12:08
into that territory? How do you taper
12:10
the weight or how do you zero
12:13
it out or make sure it's reset? So
12:15
when the models are trained, we
12:18
start training a model, we feed
12:20
it a certain number of scans. After
12:22
some time, we need less and
12:24
less training, but we do need training
12:26
for edge cases or new things
12:28
that come up. So I would say
12:30
never.
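[On the overtraining question: one standard guard, shown below as an assumption rather than Keymaker's stated method, is early stopping, where training halts once held-out validation loss stops improving.]

```python
class EarlyStopping:
    def __init__(self, patience: int = 3):
        self.patience = patience   # epochs to wait for an improvement
        self.best = float("inf")
        self.stale = 0

    def should_stop(self, val_loss: float) -> bool:
        if val_loss < self.best:
            self.best = val_loss   # still improving: reset the counter
            self.stale = 0
        else:
            self.stale += 1        # no improvement this epoch
        return self.stale >= self.patience

stopper = EarlyStopping(patience=3)
for loss in [0.9, 0.7, 0.6, 0.61, 0.62, 0.63]:
    if stopper.should_stop(loss):
        print(f"stopping: validation loss stuck near {stopper.best}")
        break
```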
12:33
What do you do? So that's where we
12:35
need human input, and then we
12:37
teach the model as we would like,
12:39
as an expert in this field would
12:41
do. They would learn what it is
12:43
and explain it. Basically, we create the training data
12:45
that outlines this edge case to the
12:48
model. What's an example of it? For
12:50
example, new types of tumors, for example,
12:52
or if you're going to the same
12:54
ultrasound where the, let's say, the organs
12:56
we're looking at look different, completely different,
12:58
or it's a person with results that
13:00
haven't been seen before. In any type
13:02
of data, edge cases happen, and that's
13:04
the thing about edge cases. We can't predict
13:06
them, they just happen, and when they
13:08
happen, then we deal with them. You
13:10
don't just delete them, I would think
13:12
you would learn from them, but do
13:14
you not include them in the training
13:16
data, or what do you do with
13:18
them? No, we have to include
13:21
them. That's how. Yeah. So in this
13:23
case, we identify it as an edge case
13:25
or the model fails. Like the model
13:27
basically says, okay, I don't know what to
13:29
do with this. This looks different. Then
13:31
we have experts, let's say radiologists or a
13:33
number of experts that look at those
13:35
edge cases and they decide what it is
13:37
and mark it as what they decided that
13:39
it is. And we use that as
13:41
training data for the model.
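[The edge-case loop just described, as a sketch: when the model's confidence is low, the scan is routed to an expert queue, and the expert's label becomes new training data. The threshold and names are assumed for illustration.]

```python
def route_scan(probabilities: dict, threshold: float = 0.8):
    """Return (label, None) if confident, else (None, 'expert_queue')."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return label, None
    return None, "expert_queue"  # the model effectively says "I don't know"

new_training_data = []
scan = {"id": "scan_042", "probs": {"internal_bleeding": 0.55, "normal": 0.45}}
label, queue = route_scan(scan["probs"])
if queue == "expert_queue":
    expert_label = "internal_bleeding"        # radiologists decide what it is
    new_training_data.append((scan["id"], expert_label))  # back into training
```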
13:43
Okay. Are edge cases particularly useful, then? They
13:46
are extremely useful, they're essential to
13:48
keep the models up to date.
13:50
Why? How are they useful? Well,
13:52
if you encounter something new, you
13:55
want the models to learn that
13:57
something new exists. And if they
13:59
encounter this again, they will know
14:01
what to do with it. So
14:03
what kind of edge cases would you get
14:06
with internal bleeding, for instance? Well, it's
14:08
kind of hard to answer this question
14:10
like, it just doesn't look like... It's
14:13
really specific, but let's say it
14:15
doesn't look like it's a bleeding, but
14:17
it is or vice versa, like false
14:19
negative or false positive. It can go
14:21
both ways. Okay. So that means you
14:23
edge cases tend to be false positives
14:25
or false negatives, or random?
14:28
It really depends. There is no
14:30
rule, it can be anything. That's the
14:32
whole idea for edge cases. You
14:34
never know what it is, but when
14:37
you encounter one and the model fails, then we
14:39
have to train the model on those
14:41
edge cases as well. But it
14:43
can be there's a skew in edge
14:45
cases. They're not symmetrical around the
14:47
main data. Maybe that would
14:49
tell you something. Long tail. Yes,
14:51
but again, so it's important for
14:54
model training. Okay, so you use
14:56
AIs that detect internal bleeding when
14:58
someone comes into the hospital. What
15:00
else are they being used for,
15:02
in the vision sense? There are actually
15:04
so many applications. It starts from
15:06
recognizing pathology on the scans, like
15:08
any type of scans, to cameras
15:10
in hospitals, or cameras in elderly
15:12
homes that ensure that the people are
15:14
well, like they recognize that if it's
15:17
an elderly home, the camera, the smart
15:19
camera, of course, there's full privacy
15:21
to it, but the smart camera would
15:23
be able to see if the person,
15:25
say the elderly person in the house, is having
15:27
an issue or they're having a heart
15:29
attack or they fell down, basically motion
15:32
recognition as well, or they need help.
15:34
So instead of basically a person having to
15:36
press a button, call for help, the
15:38
camera would recognize it right away and
15:40
call the ambulance, call for support. So
15:42
it's everything. It's really helping to, it's
15:44
really helping to, if you're looking specifically
15:47
at elderly care, it's really helping to
15:49
improve their life and save lives of
15:51
elderly people because instead of... How do
15:53
you know? Is it in use or
15:55
is it still being tested? It's in
15:57
use. Yeah, yeah, it's been in use.
15:59
It's been in use actually for quite
16:01
a while. It's getting better and better,
16:04
especially for motion recognition or action recognition. It's
16:06
usually autonomous cameras that are deployed at
16:08
people's houses, multiple companies, let's say multiple...
16:11
service providers that do that, but the
16:13
idea is this camera fully autonomous that
16:15
can recognize if the person's having like
16:17
a heart attack or stroke or anything
16:19
is wrong with them. And then it
16:21
calls for help right away. Where are
16:23
these? Only in hospitals or? You have
16:25
them in hospitals as well. You have
16:27
them in private homes. This is like
16:30
a service that you have a security
16:32
camera in a home. It's a choice
16:34
of, it's like people's choice to use
16:36
the services or not. Is that something people
16:38
just have in their home? Yeah, some
16:40
people do. This technology has been around
16:42
for a few years already. Of course,
16:44
it's getting better and better. But
16:47
more and more people choose to
16:49
use that in their home. It's like
16:51
having a smart camera facing outside,
16:53
detecting motion, but the same thing, but
16:55
more sophisticated, fully private inside.
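[The deployed cameras use trained action-recognition models; as a much simpler illustration of the motion-trigger idea, this sketch flags large frame-to-frame changes with OpenCV, entirely on the local device. The thresholds are arbitrary assumptions.]

```python
import cv2

cap = cv2.VideoCapture(0)               # local camera; nothing leaves the device
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)  # pixel-wise change since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 0.3 * mask.size:  # large sudden change, e.g. a fall
        print("large motion event: alert a caregiver (placeholder action)")
    prev_gray = gray

cap.release()
```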
16:57
Which companies produce this? Where can people get
16:59
it, for instance? That's a
17:01
good question. So I will not
17:03
be able to answer that. But
17:06
I'm sure if you search for
17:08
like even ChatGPT or Google for
17:10
cameras, smart cameras for elderly care,
17:12
there are multiple companies that do
17:14
that. Now, we at Keymaker, we
17:16
are a service provider for creating
17:18
the training data. So we will
17:20
be on the other end of
17:23
those cameras. We will be helping
17:25
to develop and train those models,
17:27
but we don't sell them. So
17:29
like kind of going up the
17:31
development chain, we would be like
17:33
the building block of creating this
17:35
AI and then multiple companies, multiple
17:37
camera companies or healthcare companies that
17:39
would purchase this AI as a
17:42
service together with the camera if
17:44
needed, deployed under their brand. Any
17:46
other interesting examples of work that
17:48
you've done, you know, where they use this
17:50
AI to diagnose. Any other examples?
17:53
There are so many. Let me
17:55
think for a second. What else
17:57
is really interesting? I
17:59
would say one of the examples
18:01
I personally really like goes
18:03
back to what I mentioned before
18:05
is X-ray or MRI recognition. If
18:07
there's any issue with a broken
18:09
bone or internal bleeding or tumors,
18:11
etc. But the way it's used
18:13
is in remote locations. So you
18:15
have, let's say, an X-ray
18:18
scanner in a remote location, either
18:20
in places that are just, say,
18:22
far away. There's no hospitals there,
18:24
no doctors there, but there's the
18:26
clinic station or the medic station
18:28
with those, with the scanning device.
18:30
And instead of waiting for
18:32
a doctor or transporting yourself to
18:34
a hospital, the diagnosis can be
18:36
done right away at the spot.
18:38
So it truly enables fast and
18:40
cheap healthcare in a way in
18:42
remote location. Now, the most, just
18:44
so far, like the most important
18:46
part of healthcare, like of
18:48
helping somebody is understanding what's wrong.
18:50
And here, we immediately understand what's
18:52
wrong and choose, even remotely, choose
18:54
the appropriate treatment for that person.
18:56
So it really helps to help
18:58
people to get proper health care
19:01
in the remote locations first.
19:03
Nothing's near there, no hospitals, no
19:05
anything.
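[A sketch of inference at a remote station as described: the scan goes straight to a locally stored model and a finding comes back on the spot. The model file, class list, and preprocessing are hypothetical placeholders.]

```python
import torch
from torchvision import transforms
from PIL import Image

classes = ["normal", "fracture", "internal_bleeding", "tumor"]
model = torch.jit.load("xray_model.pt")  # trained model shipped to the station
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def diagnose(path: str) -> str:
    image = preprocess(Image.open(path)).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        probs = model(image).softmax(dim=1)[0]
    return classes[int(probs.argmax())]

print(diagnose("patient_xray.png"))      # an immediate finding, no transport needed
```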
19:07
Okay. What's the best way for people to keep tabs on
19:09
your work? Where should they go?
19:11
Well, again, like Keymaker, we're
19:13
not at the end of it... We're
19:15
just a building block in
19:17
creating those models. So nothing really,
19:19
but there's always news and things,
19:21
new things coming up in healthcare,
19:23
so just general stuff. Okay, well,
19:25
very good. Well, thank you for
19:27
coming on the podcast and explaining
19:29
it. I really appreciate it, Maria.
19:31
Thank you. If you like this
19:33
podcast, please click the link in the
19:35
description to subscribe and review us on
19:37
iTunes. You've
19:42
been listening to the Finding
19:44
Genius podcast with Richard Jacobs. If
19:47
you like what you hear, be
19:49
sure to review and subscribe to
19:51
the Finding Genius podcast on iTunes
19:53
or wherever you listen to podcasts.
19:55
And want to be smarter than
19:57
everybody else? Become a premium member
19:59
at FindingGeniusPodcast.com. This podcast is for
20:01
information only. No advice of any
20:03
kind is being given. Any action
20:06
you take or don't take as
20:08
a result of listening is your
20:10
sole responsibility. Consult professionals when advice
20:12
is needed.