Episode Transcript
0:00
Welcome to the prompt engineering podcast,
0:03
where we teach you the art of writing effective
0:05
prompts for AI systems like ChatGPT,
0:07
Midjourney, DALL-E,
0:10
and more. Here's your host, Greg
0:12
Schwartz.
0:15
Welcome to a joint episode
0:17
of the
0:17
Prompt Engineering podcast and the How to
0:20
Talk AI podcast.
0:21
We've got some awesome guests, so go ahead
0:23
and introduce
0:24
yourselves, guys.
0:25
Yeah, I am Aaron, er, the co-founder
0:27
and CEO of Vital.
0:30
And I'm Felix Brand, the
0:31
Vice President of Data Science.
0:34
And they have a terrific product
0:36
that they're launching today. We're gonna hear all
0:38
about it. I think it's something that would resonate
0:40
with everyone and anyone that's
0:43
been to the doctor and had questions about
0:45
what was being told
0:46
to them. Yes. I
0:47
already tested it after watching your talk. Cool.
0:50
Really? Yeah. So I have sleep apnea. Yeah.
0:52
I put in a long diagnosis
0:54
with a bunch of stuff that I'm like, okay, I think I know
0:57
what that is. Yeah. I don't know what the hell that is. Yeah.
0:59
And it was like, sleep apnea, obstructive.
1:02
Yeah.
1:02
And two other things. Yeah. Oh.
1:04
Fantastic. Okay. That's great. I,
1:06
I think like you said what person
1:08
hasn't seen a whole long
1:11
list of doctor's notes or even been
1:14
in a situation where you're maybe an
1:16
inpatient in the hospital and then the
1:19
doctor on rounds is coming by and telling you something
1:21
at a million miles a minute because he's got 20 other people to
1:23
see. But it's probably important
1:25
because it affects your own health and wellbeing. And like,
1:28
you're probably already out of it anyway, because you're
1:30
in the hospital. What a terrific way
1:32
to,
1:32
provide something. Doctor's notes are really almost
1:35
like a foreign language. Yeah. As
1:37
I said in my talk, doctors don't say
1:40
nosebleed, they say epistaxis. They
1:42
don't say, hey, your mom has had a stroke.
1:45
They say, oh, she's had a cerebral infarction.
1:48
They use all of these abbreviations. It's almost
1:50
impossible to understand.
1:52
And so we use the large language
1:55
model as the core of
1:57
what we call our Doctor to Patient Translator,
2:00
and it's at vital.io slash
2:02
translate. It's free to the public, available
2:04
worldwide, literally as of today.
2:06
You're just catching me at a good time. And
2:10
we're happy to tell you a bit about the
2:12
prompts, the classifiers, the pre-parsing,
2:14
and all the things that we do to
2:17
make that possible, technically.
2:19
Yeah, that would be great. I would love to
2:21
delve into some of the technical aspects.
2:23
Maybe this is a better question for Felix. Could you tell us a little
2:25
bit about how the model was trained and
2:27
what data was used to be able to produce these
2:29
great
2:29
completions? Sure.
2:31
We've tried a number of different prompts because there are actually
2:33
a lot of different types of doctor's notes. And with
2:35
the public facing stuff, we know that we're going to get the
2:37
whole gamut from imaging all the way to discharge
2:45
stuff. When people get their paper discharge instructions,
2:48
upwards of 90% of them chuck them straight in
2:50
the bin as soon as they leave the hospital. And the
2:53
literature shows that for people who understand their care, and
2:55
understand the follow up instructions
2:57
the doctors are giving them, their post-care risk is way
3:00
lower. So we've looked
3:02
at different prompts for different situations
3:04
and then built a pre
3:06
model classifier, a pre LLM classifier,
3:08
also using a language model, a
3:12
small one, deciding which of our various prompts should be used
3:15
to rewrite someone's notes. And then we have
3:17
a whole bunch of post-parsing. When it comes out, we
3:20
take sections out of the translation,
3:22
we plug those sections into the website,
3:25
maybe when you saw it, you could see that you
3:27
get like a very brief summary. And then
3:29
also a much more sort of technical breakdown.
3:32
Yes. So we're getting the LLM to pull out a lot of information
3:35
about what's in your doctor's note, but we want
3:37
to show you in like a digestible
3:39
summary first. Yeah. I think an important
3:41
piece of context is a lot of these doctor's notes,
3:43
they're 10 or 15 pages long,
3:46
and they have 80% boilerplate.
3:49
Yeah. They have a, hey, don't smoke. Or,
3:52
hey, here's COVID education. Okay, you're two years out
3:54
of date. And they put a lot of filler
3:56
in there. And this is actually just a fraction
3:59
of our primary business. Our primary business is
4:01
patient experience offering. It guides you through
4:04
an ER visit, or if you have to stay
4:06
overnight in the hospital, it explains your lab results,
4:08
how long you're going to wait. And then your
4:11
notes. Yeah. And because
4:13
we have experience with a million patients
4:16
a year using it, we know the structure of notes
4:19
from all over the country. And so
4:21
we can pre-parse, and instead
4:23
of a 10 or 15 page note, we can
4:25
get it down to, actually, we only need to pass 3
4:27
or 4 pages into the LLM. That's
4:30
an important business and engineering consideration,
4:33
because of cost and speed.
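A rough sketch of the pre-parsing idea Aaron describes here: strip known boilerplate sections before the note ever reaches the LLM. Vital's actual code isn't public, so the heading list and the naive section splitting below are illustrative assumptions.

```python
import re

# Hypothetical boilerplate headings; the real system learns note
# structure from the ~100 hospitals feeding it data.
BOILERPLATE_HEADINGS = (
    "smoking cessation education",
    "covid-19 patient education",
    "general discharge instructions",
)

def strip_boilerplate(note: str) -> str:
    """Drop known-boilerplate sections so only the patient-specific
    3-4 pages of a 10-15 page note ever reach the LLM."""
    # Naive split: treat short "Title:"-style lines as section starts.
    sections = re.split(r"\n(?=[A-Z][^\n]{2,60}:\s*\n)", note)
    return "\n".join(
        s for s in sections
        if not any(s.lower().lstrip().startswith(h) for h in BOILERPLATE_HEADINGS)
    )
```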
4:36
Also context window. If
4:38
you're doing, especially if you're using few-shot
4:40
training with an LLM, which is a good
4:43
idea so that it knows what output
4:45
you want to get. You'll blow through
4:48
your prompt, your few-
4:50
shot examples, your data, and then your output; it all has to fit into
4:52
a 4K window or a 16K window.
4:56
And so you need to do a few things to give
4:58
yourself as much prompt room as
5:00
possible.
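To make the budgeting concrete: the prompt, the few-shot examples, the input note, and the expected output all share one context window. Here's a minimal sketch using the tiktoken tokenizer; the window size and output reserve below are assumed numbers, not anything Vital quoted.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/GPT-4-era tokenizer

def fits_window(system_prompt: str, few_shot_examples: list[str], note: str,
                window: int = 4096, output_reserve: int = 1024) -> bool:
    """True if instructions + few-shot examples + the note itself
    still leave `output_reserve` tokens for the model's answer."""
    used = sum(len(enc.encode(text))
               for text in (system_prompt, note, *few_shot_examples))
    return used + output_reserve <= window
```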
5:02
That makes complete sense. But having these almost
5:05
sub-prompts acting like little sub-
5:07
agents themselves trained to
5:10
say just get rid of all the boilerplate
5:12
stuff that's not unique to
5:15
that patient's differential diagnosis.
5:17
Exactly. So deciding which part you're
5:19
going to do, more or less, with
5:21
your own code or your own classifiers,
5:24
and then how much to send, especially if you're using a, like
5:26
a commercial LLM. And we've used
5:28
both. Felix has got Llama up and running
5:31
too. Yeah. And Med
5:33
PaLM, which is
5:35
medical-specific, obviously. With
5:37
OpenAI, we can't actually use Open
5:39
AI directly. You have to use it through Azure because
5:42
you need this to be HIPAA-compliant. We're
5:45
in a regulated industry. OpenAI will
5:48
not sign all of those things. You actually have
5:50
to, like, work your way through corporate
5:52
Microsoft. Yep, they'll determine
5:55
whether you're a worthwhile person or not, and
5:57
whether they're willing to take the risk, and
6:01
so if you put all of it, you can, with a sophisticated
6:03
prompt, put it all through GPT. You
6:06
can say, classify this. Is this a discharge
6:08
report? Is this a physical therapy
6:10
report? Or is this a hostile input?
6:13
By the way, you should always protect against hostile input.
6:15
Is this a non English input? Is this something
6:18
else entirely? And then,
6:21
in your prompt, you can say, based on the classification,
6:23
then do this. But if you do
6:25
all that, your prompt starts to get very complicated
6:27
and very big. You can use that to prototype,
6:30
but when you go into production, this
6:32
is also very slow and gets very expensive, so you
6:34
run a classifier that's much simpler and
6:37
much quicker on top of it, and
6:39
then you don't have the expense, your prompt's shorter.
6:41
And then you can say, if it's this, go to this prompt.
6:43
If it's that, go to that prompt. You can
6:45
also templatize prompts. So if you
6:47
say, I want the output in Spanish,
6:50
you can put a variable in your prompt. So the
6:52
prompts, don't think of them as static strings.
6:55
Think of them as a programming language
6:57
that is frankly pseudocode, yeah?
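A minimal sketch of the pattern described above: a small, cheap classifier routes each document to one of several templated prompts, with variables like the output language filled in at the last moment. The categories, the template wording, and the call_llm wrapper are all hypothetical stand-ins, not Vital's actual prompts.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for whatever backend you use (Azure OpenAI,
    Llama, Med-PaLM, ...); swap in your actual client here."""
    raise NotImplementedError

# Hypothetical templates, one per document type. {language} shows the
# "prompts as templates, not static strings" idea.
PROMPTS = {
    "discharge": "Rewrite these discharge instructions in plain {language} "
                 "for a patient:\n\n{note}",
    "imaging":   "Explain this imaging report in plain {language} "
                 "for a patient:\n\n{note}",
}

def classify(note: str) -> str:
    """Stand-in for the small, fast pre-LLM classifier. In production
    this would be a lightweight model, not keyword matching."""
    text = note.lower()
    if "discharge" in text:
        return "discharge"
    if any(word in text for word in ("x-ray", "ct ", "mri", "impression:")):
        return "imaging"
    return "rejected"  # hostile, non-English, or unsupported input

def translate(note: str, language: str = "English") -> str | None:
    kind = classify(note)
    if kind not in PROMPTS:
        return None  # always protect against hostile input
    return call_llm(PROMPTS[kind].format(language=language, note=note))
```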
7:00
One of the things, and this is a bit medical-
7:02
specific, but the part that's very important
7:04
to patients is the plan and assessment,
7:07
what the
7:07
doctor says you're supposed to do. Here's the
7:09
problem. In some hospitals it's called plan and assessment. In other
7:11
hospitals it's called assessment. In other hospitals it's called
7:15
plan. In other hospitals it's got like an abbreviation.
7:17
And with classic programming if I say
7:19
match 'panda' and I give it 'pandas'
7:22
with a plural, it's no
7:23
match. Or you got a space in your column
7:25
header. Exactly. But with an
7:27
LLM, I can just be like, it's gonna be called
7:29
this, or probably this. It's got stuff
7:31
that kind of looks like this and like
7:33
it's good enough that if I explain it to you guys, you'd be
7:35
like, oh, okay. I know what you're looking for. That's
7:38
the power of LLMs is
7:40
you can give them vague pseudocode.
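To make the "vague pseudocode" point concrete, a prompt for pulling out the assessment-and-plan section, however the hospital happens to label it, might read something like this. The wording is illustrative, not Vital's actual prompt, and it reuses the hypothetical call_llm wrapper from the earlier sketch.

```python
# Illustrative only. The point is that the instruction tolerates every
# heading variant an exact string match or regex would miss.
EXTRACT_PLAN_PROMPT = """\
Below is a doctor's note. Find the section containing the assessment
and plan. It might be headed "Assessment and Plan", "Plan and
Assessment", just "Assessment" or "Plan", an abbreviation like "A&P"
or "A/P", or anything that merely looks like one of those (plurals,
odd spacing, and punctuation differences all count). Return only that
section's text, or NONE if no such section exists.

NOTE:
{note}
"""

def extract_plan(note: str) -> str | None:
    response = call_llm(EXTRACT_PLAN_PROMPT.format(note=note))
    return None if response.strip() == "NONE" else response
```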
7:42
Yeah, and to me, that's mind blowing.
7:45
This guy actually has a map of how that's
7:47
parsed. So
7:48
real quick before we get into that, just for the
7:51
audience, part of what I do on my
7:53
podcast is, like, what are all these
7:55
technical terms? Context window, number one.
7:57
It's literally how much stuff you're putting into the prompt,
7:59
but also how much it's filling out, and
8:02
if you do too much, it forgets the stuff outside
8:04
the prompt window. Sorry, the context window.
8:07
And so you have to be careful how long everything
8:09
is. That's what they're talking about when you're saying,
8:11
if I can pull pieces of the prompt out and only run
8:13
them separately, it's way better.
8:46
It's a key reason to run your own models, because
8:49
for a long time you've been working with a 4K
8:51
context window, and if you're doing
8:53
this few-shot, in-context learning, as
8:56
Aaron says, you just run through it.
8:57
Yeah. And also, I'm
8:59
the CEO as well as, maybe
9:01
you can tell I have a bit of an engineering background, not as
9:04
good as this guy. I don't have the British
9:06
accent, which is, that's true. And
9:08
also, that adds
9:09
20 IQ points, right? Yes.
9:12
But as the CEO, I have to think through the economics,
9:15
right? If you were using GPT-4 and
9:17
you give it the 16K or 32K window,
9:21
the maximum one, it's going to cost you,
9:23
if you fully fill that thing, it's going to cost
9:25
you about 48 cents per translation
9:28
or transformation, right? Yeah. We have
9:30
a million patients on our platform. They have
9:32
about five notes each. You do
9:34
the math on that and you're spending
9:36
$5,000 a day.
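The back-of-envelope math, using only the figures quoted in conversation, works out to the same order of magnitude:

```python
cost_per_translation = 0.48   # dollars, fully filled 32K GPT-4 window, as quoted
patients_per_year = 1_000_000
notes_per_patient = 5

annual = patients_per_year * notes_per_patient * cost_per_translation
print(f"${annual:,.0f} per year, about ${annual / 365:,.0f} per day")
# -> $2,400,000 per year, about $6,575 per day: roughly the
#    "$5,000 a day" figure Aaron cites.
```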
9:38
Yeah. If that's what you do. You don't need
9:40
to. You use smaller context windows,
9:43
or you use 3.5 Turbo, or you run
9:45
Llama. Yeah. Or you use one
9:47
LLM to pre-parse for a different LLM.
9:50
Those are the tricks that, like, practically
9:53
speaking, this is an immature
9:55
industry, because you have to hand-do
9:58
all of that.
9:59
And what's really interesting is, some of these problems
10:01
are really exciting and new. As Aaron says,
10:03
you're trying to pull out something that's
10:05
very ill-defined in a free-text document.
10:08
Okay. So that's where you need some modern stuff to
10:10
do that. But some of these problems are pretty traditional.
10:12
Classifying a document when you've got plenty
10:15
of examples. You don't need to go and use
10:17
your OpenAI LLM to do this
10:19
classification problem. We've been doing this for a long time. And
10:21
you can do them a lot cheaper.
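As a concrete example of the traditional, cheap approach Felix is pointing at, a scikit-learn text classifier covers this kind of routing without any LLM call. The labels and training notes below are placeholders; in practice you'd train on thousands of labeled documents.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; real training sets would have thousands of notes.
notes = [
    "Discharge instructions: follow up with your PCP in two weeks ...",
    "CT CHEST W/O CONTRAST. Impression: no acute abnormality ...",
]
labels = ["discharge", "imaging"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(notes, labels)

print(classifier.predict(["XR WRIST, 3 VIEWS. Impression: distal radius fracture."]))
# Milliseconds per document and fractions of a cent -- no LLM call needed.
```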
10:22
Yeah, it's slow and expensive to use OpenAI,
10:25
or Google, or
10:28
Anthropic for basic classifications. But it's
10:30
great for prototyping. So the key
10:32
insight is to work out the piece that you really
10:34
need the expensive tech for, and ensure
10:36
that you boil down the problem only to that, using
10:39
other pieces of technology
10:40
upstream. Yeah. So how do you handle,
10:42
like, if you have all these
10:44
prompts essentially acting as agents, and
10:47
you have to have this sequence occur in
10:49
a specific order, how do you, asynchronously,
10:52
is there a specific layer that's doing the
10:54
handoff? Are they doing
10:56
a turnover at rounds in between?
10:59
Synchronization. Let
11:01
me get a little technical. So we use an
11:03
event-sourced architecture. So this
11:05
is outside of AI, which basically
11:07
means that we handle streaming data quite well. So
11:09
we have data that's streaming from
11:12
over 100 hospitals now, more or less
11:14
real time. It comes out of Cerner, Epic, whatever
11:16
the electronic medical record system is.
11:18
So a doctor writes a new note, finishes it, it hits
11:20
our system and
11:23
goes on to be parsed,
11:26
classified, cut up into little bits, and
11:28
then divvied out to the, yeah,
11:31
you need to synchronize it so you have queues of
11:33
work. Those queues can back up. We just
11:36
launched this.
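Vital's event-sourced pipeline is proprietary, but a toy sketch of the shape Aaron describes, notes arriving as events and flowing through queued parse/classify/translate stages so that spikes back up rather than fall over, might look like this. It reuses the hypothetical helpers from the earlier sketches, and the worker count is arbitrary.

```python
import queue
import threading

work_q: "queue.Queue[str]" = queue.Queue()  # backs up under load instead of failing

def handle_note(note: str) -> None:
    """Hypothetical pipeline: pre-parse, classify, then translate."""
    trimmed = strip_boilerplate(note)   # from the pre-parsing sketch above
    result = translate(trimmed)         # classifier + LLM call, the slow step
    if result is not None:
        print(result)                   # stand-in for publishing a result event

def worker() -> None:
    while True:
        note = work_q.get()
        try:
            handle_note(note)
        finally:
            work_q.task_done()          # keep draining even if a note errors

# A fixed worker pool bounds concurrent LLM calls; a traffic spike just
# lengthens the queue (the "hour-long wait") rather than toppling the system.
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()
```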
11:37
Unfortunately at this point, we had some
11:39
audio challenges. So the video will
11:42
continue. But going
11:44
forward, we're only able to use audio
11:46
from a much lower quality source.
11:49
So it's going to get kind of noisy from here.
11:51
I'm sorry about that. The rest of the interview
11:53
is definitely very interesting. But
11:56
it was a pretty noisy room.
11:58
I've been so busy talking to people. For all
12:00
I know, the system has got an hour wait
12:02
queue backed up. But it won't
12:03
fall over. It will just queue up. It
12:06
took two tries, and it was about 45 seconds,
12:08
but it worked! That
12:09
means, eventually, that's actually, I'm
12:11
like, happy to hear that, not from your
12:13
experience, but it means that we're putting serious
12:16
load on this. It means that people are, this
12:18
is a good day in the history of Python. But
12:21
you have to have a robust architecture to handle
12:23
that and not get things out of order and handle server
12:25
restarts and all of that,
12:28
so it's a pretty
12:30
engineering-heavy response, but yeah, it
12:32
can be
12:32
handled. And to speak to Aaron's answer
12:35
earlier, this is something new that we're doing, but
12:37
we have, what, a good four products at
12:39
the moment? Yes. We have a patient experience
12:41
product, which is going to guide your experience through
12:43
the emergency room. Yeah. And we're doing a bunch of AI
12:46
there. We're predicting, how long are you going to wait for a bed?
12:48
How long are you going to wait until a doctor comes and sees
12:50
you? Yeah. What are the lab results that
12:52
are coming back, and what do they really mean for you? We've
12:55
got a product for care teams. We're
12:57
providing clinical decision support alerting.
13:00
Are you likely to get sepsis at some point in
13:03
your stay? How likely are you to be admitted? Like allowing
13:05
doctors to manage their workflows
13:08
using this kind of alerting system. We've got a
13:10
system which allows you to find follow
13:12
up care afterwards. And so basically,
13:15
we've been doing this for a long time. We've been doing
13:17
it, what are we, like six years now? Six years, yeah.
13:19
Yeah, we and we've been dealing with this huge pipe
13:21
of patient data for a long time. We're not new to this.
13:23
The event sourcing stuff, that's not for the LLM stuff. That's
13:26
running our systems. That's running our systems at a hundred
13:28
hospitals, a million patient visits. That
13:30
stuff has been the
13:31
easy part for sure. That's right. So if this
13:33
sounds foreign or if you don't have a system like
13:35
that with a robust retry mechanism, it'll
13:38
take you a couple of years of engineering to get to
13:40
that solid
13:42
system. That's some getting your hands dirty, just in
13:44
the mud. Yeah. Coding and debugging
13:46
just to get there.
13:47
Medical data is the messiest data I've
13:49
worked with
13:50
so far. That's a great, that's a great segue maybe into
13:53
can you tell us a little bit about the process that you
13:56
had to go through to have an LLM,
13:58
handling HIPAA-secure
14:01
patient data. Yeah. I know this is a big fear that
14:03
a lot of enterprise customers have. We
14:05
don't want our trade secrets to get out there.
14:07
We have legal,
14:10
proprietary, interactions with our clients.
14:13
Yeah. We're in a regulated industry, right?
14:15
This is, fortunately
14:18
or unfortunately, not new to me. I
14:20
was the founder of a company called Mint.com.
14:23
We took the usernames
14:25
and passwords for 25 million
14:27
people and a hundred million bank accounts, including
14:30
me. Yeah, it was a long time ago. Including me!
14:32
That's right.
14:33
No. Yeah,
14:34
and have never had a security breach. At
14:37
least to my knowledge. I sold the company about
14:39
a decade ago. So we're used to dealing
14:41
with sensitive information. You
14:43
want outside penetration testing,
14:46
outside audits. HIPAA and HITRUST
14:48
are even more thorough:
14:51
routine outside security audits. Honestly,
14:54
it can sometimes be a pain to log into our own
14:56
systems; it requires multi-factor, fingerprints,
14:59
and a drop of blood, but it
15:01
is very secure.
15:07
You cannot do this with
15:10
OpenAI. You have to go with,
15:13
Google will sign what's known as a
15:15
BAA, a Business Associate Agreement.
15:18
And it's part of the medical
15:21
chain of liability that says, hey,
15:23
we have the right insurance. If
15:25
we mess up, we have to legally report
15:27
it to you, and you have to report it back to the health
15:29
system. Here's our security practices,
15:32
and we have to look at those, and we have a whole
15:34
compliance office to do all
15:36
of this.
15:37
And so you actually can't go, in some
15:39
sense, with the LLM startups.
15:41
Yeah. Microsoft Azure is a fantastic
15:43
choice to start out with. Google's
15:46
been aggressive once they saw what we were doing.
15:48
'cause this has been out internally in our
15:50
products for two or three months.
15:53
And yeah, but they're also Google and Microsoft.
15:55
They know what they're doing when it comes to security.
15:57
Honestly, when it comes to medical
16:00
information, it's all the people
16:02
who are still running local servers...
16:05
Yeah, that's it. You want
16:07
to know why they have so many like
16:10
malware attacks. They're on an old version
16:13
of Windows. They don't patch their stuff. And,
16:15
they may or may not be
16:17
the best IT people in the business. I absolutely
16:20
trust the security of AWS
16:22
and Microsoft and Google. Because they have too much
16:25
to lose as companies. We
16:27
have a super secure system.
16:29
And we trial it all
16:31
the time.
16:32
And obviously, our BAA includes
16:35
none of our data being used for training.
16:38
Of course. Yeah. Nice.
16:40
Speaking of the patient experience,
16:42
right? Yeah. If, is it a bespoke
16:45
interaction each time I log onto the app?
16:47
Yeah. Or does it keep my health
16:49
record, so to speak, so I can refer back
16:51
to the last time I used it? And
16:53
then, is that stored locally on
16:56
my device, or is it used, in any
16:58
sort of... process to make
16:59
the tool better. So our primary business
17:01
is a tool that guides you through your visit
17:04
at an ER or inpatient. And
17:06
that is visit based. So we know what your health
17:08
history is and we might show you a little bit of your past
17:11
visit, but it's meant to use at the time
17:13
that you're at the hospital or the emergency
17:15
room or having surgery or something like that.
17:18
And it's just walking you through that experience and
17:20
understand your lab results. These are the videos
17:22
you should watch so you can understand it. These are
17:24
the medications and what you need to know about
17:26
the side effects. We give you
17:28
access to that data for the couple
17:31
weeks following your visit, but we always
17:33
hand you off to the patient portal, at least
17:35
for now. And I will be tight-
17:37
lipped about whether you will ever have a full health
17:39
history. I.e.,
17:42
I've been pitched probably a dozen times on,
17:44
we're the Mint for healthcare. And I was like, I
17:46
could do the Mint. It's
17:49
vaguely familiar as a
17:50
business concept. I've done this before. So
17:53
nothing to announce today, but it's in the back of my mind.
17:55
I'm sure people would, of
17:58
course it would resonate with
17:59
someone to be able to query years
18:01
and years of interactions, and not to mention
18:03
the opportunities if you
18:06
apply some machine learning over top of some
18:08
of that: diagnostic opportunities
18:10
to catch
18:10
things early. Now it's like you're inside what
18:13
my long-term business vision is. Theoretically,
18:15
I could calculate your health future
18:18
if I had a big enough data set. Yep. And
18:20
keep in mind that I... At
18:24
Vital, we now see 2% of all U.
18:26
S. emergency medicine. Wow! For
18:28
a startup that's been around
18:30
for not that long, that's a pretty
18:32
good sample size. We can see
18:35
how diseases progress and,
18:37
that there's more of this type of fall in the winter than
18:39
there is in the summer, right?
18:42
I know there's entire industries, like the health
18:44
insurance
18:45
industry
18:46
that has modeled on curves
18:48
exactly when you're gonna die based on, the
18:50
fact that you went skydiving once when
18:52
you
18:52
were 31. Sure. Yeah. I
18:54
could probably predict whether Bird
18:56
and Lime are doing business well based
18:59
on the number of elbow injuries and wrist
19:01
fractures that we can
19:03
plot over time. That's unfortunately
19:06
not a joke. Wow.
19:07
Okay, then I have to ask, since Google
19:10
got rid of the, I forget what they called it, but the
19:12
flu predictor feature that they had for so long?
19:15
Is that something you guys might potentially productize?
19:18
No, we won't use that
19:20
sort of stuff. It's really interesting and we probably could do
19:22
it internally. But we did come
19:24
up with a COVID
19:26
checker that was used a million and a half times. Wow. Yeah.
19:28
We were the first one out before
19:31
Google, before Microsoft. The CDC
19:33
considered using us. I was literally on the phone
19:35
with the White House Task Force in the middle of the night developing
19:38
this thing. A million and a half uses
19:41
within the first month. We did the COVID checking for the state
19:43
of
19:43
Oregon, right? Yeah. The whole state. We
19:45
pivoted the whole company as soon as the pandemic started.
19:48
Yeah. Said, okay, we've got all this health data coming
19:50
in. We've got the data science chops. Let's try
19:52
and do something quickly with that. Yeah, nice.
19:54
But the sort of north star for
19:56
the company is what's right for the patient.
19:58
Will it improve patient outcomes? I'm
20:01
really tired of most of healthcare.
20:03
I'm looking at you, Medicare Advantage, which
20:06
is, frankly, just financial arbitrage.
20:09
They're basically like, Okay, so the government says
20:11
New York's a more expensive place, we'll pay $1,400
20:14
a month for somebody over 65. Phoenix
20:17
is cheaper, so we'll pay you $1,100.
20:20
And Medicare Advantage companies are just like, you
20:22
know what, we'll advertise in rich zip
20:24
codes to get healthy, wealthy people
20:26
and we'll leave the rest to the public system. They're
20:28
not improving patient outcomes. They're not
20:31
increasing utilization. They put up barriers
20:33
and blocks. Like you have to get a referral
20:35
from your primary care doctor. We
20:38
will do none of that. There are lots of ways to make
20:40
money in healthcare. Our investors sometimes
20:42
push us towards that. I
20:45
have fortunately had a successful startup.
20:47
I don't know, I'm not doing this for the money, primarily.
20:50
I just want to do what actually affects
20:53
patient health.
20:55
Yeah, we all have other things that we could be doing, there
20:57
are other ways to make money, but we, I've never
21:00
been in a more mission-led company,
21:02
so thank you first.
21:03
Yeah, and it resonates with everybody,
21:05
even if they're healthy, we've got parents,
21:08
we've got grandparents. Who wouldn't feel empowered
21:10
and able to help them out just with their
21:13
care, and make them feel a little more at ease during
21:15
a time of struggle.
21:16
Completely. And actually I think one of the best use
21:19
cases for what we launched today, Vital.io
21:21
slash Translate, is
21:24
if you have an elderly parent or somebody
21:26
that you're caring for, especially
21:28
if they're elderly and they're a little confused
21:31
and they went to the doctor's office and they're like,
21:33
hey dad, what did... ahhh.
21:37
Put their notes in there and see what actually
21:39
comes out. Diseases and issues
21:42
they actually have. Yeah. I
21:44
was talking... I don't know the full story, but
21:46
I had a friend whose sister died
21:49
basically because they didn't catch something
21:51
that was on page three or four. Because
21:53
humans can't scan text
21:57
that quickly. And you might have hundreds of pages
21:59
of medical history if you're a chronically
22:02
ill person. And sometimes
22:04
that history really matters. And doctors give
22:06
it like two or three minutes to maybe
22:08
scan through. AI does a way
22:10
better job of picking out the stuff that
22:13
they might need. The fair comparison, this
22:15
is, listen: we've marked
22:17
it as 99.4% safe,
22:20
as rated by a number of doctors, independent people employed
22:22
by the company. It's
22:24
not without risk. If 1 in 200
22:26
times, it'll miss something small. But
22:29
doctors miss something big. 1
22:32
in every 10 times. And so the stats
22:34
are actually much better for AI than they are for
22:36
humans, and that's the problem. And
22:38
we have some great clinicians on
22:41
our team, when I talk to our
22:43
clinical staff, our advisory board
22:45
about the stuff that they really want to
22:48
see, all of them talk about patients paying
22:50
attention to and understanding their discharge instructions.
22:53
The value there is enormous. The value in terms
22:55
of long term care and in terms of immediate
22:57
outcomes is huge.
22:58
Nice. Were there
23:00
different specialties within
23:02
medicine that were a little more challenging?
23:06
We started out with medical imaging. Medical
23:08
imaging is nice because it's confined. CT
23:11
scans, X-rays, MRIs. And
23:13
then, what we released today, I
23:16
don't know what people are going to put into
23:18
it. And so it has to be pretty robust
23:20
to doctors' notes, nurses' notes,
23:23
lab results, all sorts of things. It's
23:32
time to wrap it up. Yeah. That's
23:35
a good time to wrap it up. We really
23:37
appreciate your time today to,
23:40
come talk to us, guys. Such a product that I think
23:42
everyone can benefit from, or learn
23:44
from for other family members. And
23:46
just remind listeners at HTTTA about The Prompt
23:48
Engineering Podcast.
23:51
Yeah, thank you. We really appreciate the
23:53
time. Yeah, thank you for having us. Yeah, we never get
23:55
to talk about the nerdy tech stuff. Dude, we
23:57
can go even harder. Oh,
24:00
I'm gonna change the memory card for that, yeah, I
24:02
think I held off. I'm like, all right, tell us about your air handler
24:04
later. Yeah, I was like no,
24:07
we're not going that
24:07
deep. We're not going
24:08
that deep. Fantastic. Thank
24:10
you. Thank you
24:12
guys.
24:14
Thanks for coming to The Prompt Engineering Podcast, a
24:17
podcast dedicated to helping
24:19
you be a better prompt engineer. Episodes
24:22
are released every Wednesday I
24:24
also host weekly masterminds where
24:26
you can collaborate with me and 50 other people
24:29
live on Zoom to improve
24:31
your prompts. Join us
24:33
at promptengineeringmastermind.com
24:35
for the schedule of the upcoming masterminds. Finally,
24:39
please remember to like and subscribe.
24:42
If you're listening to the audio podcast, rate
24:44
us five stars. That helps us teach more
24:46
people. And if you're listening to
24:48
the podcast, you might want to join us
24:50
on YouTube so you can actually see the prompts.
24:53
You can do that by going to youtube.com
24:56
slash at prompt engineering
24:59
podcast. See
25:01
you next week.