Episode Transcript
0:00
Hello
0:05
everybody,
0:08
welcome to another edition of the ThoughtWorks Technology
0:11
Podcast. My name is Ken Mugrage. I'm one
0:13
of your regular hosts. We have
0:15
a little bit of a special edition this time. I'm
0:17
at the Microsoft Ignite show. So
0:20
recording this here in late November, and I was
0:22
lucky enough to coincide with
0:24
the schedules of a
0:27
couple of ThoughtWorkers I think you'll find interesting.
0:29
So today we're going to talk about where
0:31
legacy modernization meets generative
0:33
AI. My guest today,
0:36
Tom Coggrave. Tom, do you want to introduce yourself? Yep.
0:39
Hi folks, my name is Tom Coggrave. I'm a
0:42
technologist here at ThoughtWorks where I've been
0:44
looking at how we can speed
0:47
up the way in which we do modernization for
0:49
our clients, particularly looking into mainframe modernization of
0:51
late. Great, thanks. And Shodhan Sheth,
0:53
do you want to introduce yourself please? Yeah,
0:56
so I also am a technologist at
0:58
ThoughtWorks and play a similar role to
1:01
Tom, and also focus on all
1:03
things legacy modernization. Great.
1:05
So one of the common jokes
1:07
here at any large event is
1:11
don't play a drinking game around the
1:13
word AI or the phrase AI because
1:15
you won't survive. And
1:17
so we know everybody was flooded with AI, but you
1:20
all are doing some really interesting hands-on practical
1:22
work that's actually been going on for quite
1:24
some time now. So can
1:27
we start with understanding the
1:29
shift: how is generative AI helping
1:31
you comprehend these complex, long-standing code
1:33
bases for modernization? I guess
1:36
when you think about an existing
1:39
legacy code base, we're talking about something that's maybe 20,
1:41
30, 40 years
1:43
old, and likely in the
1:45
range of many millions of lines of code. And
1:48
so one of the problems with that
1:50
is the amount of time it takes humans
1:53
or individuals to
1:55
actually get through that code: read through it, understand
1:57
it, build a mental model in their head, and then
2:00
be able to explain that back to other people to
2:02
work out what to do about the problem. That time is
2:04
huge, right? Like, that takes years. I mean,
2:06
we've worked at organizations where
2:08
they measure the amount of time it takes
2:10
for a new mainframe developer to get
2:13
onboarded as around five to 10
2:15
years before they really become truly effective across
2:17
the whole code base. When we think about
2:19
this new technology that's come in, generative AI
2:21
is excellent at, you know, it's
2:23
not all-powerful, but it's very good at being able
2:25
to elaborate,
2:27
summarize and explain documents,
2:31
large amounts of text. And what is a
2:33
code base? It's a
2:35
much more structured kind of set
2:38
of documents, but in effect it is documents with text
2:40
inside it. And so that's
2:43
where we're seeing
2:45
generative AI being able to help with
2:47
comprehending these complex, non-standard code bases: by
2:50
being able to explain them to
2:52
humans and summarize the key facts and
2:54
key parts at a
2:56
much faster pace than people have previously
2:58
been able to. How did
3:00
you do it before now? Yeah. So I
3:03
think
3:05
legacy modernization has been a
3:07
longstanding problem and there are
3:09
many solutions pre-GenAI, but
3:11
in our experience, they tend
3:14
to be fairly mechanical and
3:16
they don't help with the issue around
3:19
making it human scale. And I use
3:21
that phrase without it
3:23
being a well-established phrase, so I'll try to explain it. When
3:26
you're looking at anything beyond, I don't know, a
3:28
thousand, two thousand lines of code, it
3:31
doesn't fit in your brain, right? It
3:33
becomes non-human scale. Machines are really good at
3:35
it; humans are not good at
3:37
keeping that in their head. And so these reverse
3:39
engineering tools or comprehension tools that existed weren't really
3:42
solving that problem. So you ask one of these
3:44
reverse engineering tools, "Hey, explain to me
3:46
how this business process works," and it'll give you
3:48
a 200-, 500-, or 1,000-node flowchart. That's not
3:50
something humans
3:54
can consume, because
3:56
you get lost; you can't remember that
3:58
many things at the same time. And
4:00
so I think one of the things that has changed
4:02
with GenAI is that it can abstract it
4:05
up and I think that's one of the key benefits
4:07
of using this technology. So it gives you a human-scale
4:09
answer and then you can sort of dig deeper, right?
4:11
So it could give you an answer that's maybe enough
4:14
to just give you a high-level perspective of how
4:16
that business process is working and then you can
4:19
sort of query further. And I think that's how
4:21
humans work in terms
4:23
of understanding large pieces: you first get a
4:25
high-level view and then you dig into
4:27
different parts, whatever parts you might be interested
4:29
in. So there were tools before this but
4:33
I think the efficacy was not
4:35
great and I think that's what's
4:37
changed now. So do
4:40
people just fire up
4:42
their OpenAI or Google tools?
4:45
I mean, what have we done to help
4:47
you with this process? So
4:49
yeah, when we first started, that was
4:51
actually the approach that we took, right? A lot of people in
4:54
the industry, I think, were seeing a similar sort of thing to
4:56
this and so there was a lot of kind of excitement around
4:58
firing up ChatGPT, pasting
5:01
in chunks or whole files
5:04
of code and seeing what it
5:06
could help with. And so some of the early
5:08
internal tooling that we explored used
5:10
this approach. It was relatively simple;
5:12
it was more to do with prompt
5:14
engineering, about tuning that prompt to be able to
5:16
get just the right amount of information out of
5:18
the piece of code that we're looking at, right?
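To make that early approach concrete, here is a minimal sketch of what "paste a chunk of code into the model with a tuned prompt" can look like. The client, model name, and prompt wording are illustrative assumptions, not the actual internal tooling being described:

```python
# A minimal sketch of the early "prompt-engineering over a chunk" approach.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the prompt wording is a hypothetical starting point, not a tuned one.
from openai import OpenAI

client = OpenAI()

SUMMARIZE_PROMPT = """You are helping a team understand a legacy codebase.
Summarize what the following code does in plain business language:
(1) its purpose, (2) its inputs and outputs, (3) any side effects.

{code}
"""

def summarize_chunk(code: str) -> str:
    # One chunk or file's worth of code goes straight into the prompt.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": SUMMARIZE_PROMPT.format(code=code)}],
    )
    return response.choices[0].message.content
```

Tuning here mostly means iterating on the prompt text until the summaries come out at the right level of detail for the chunk sizes that fit in the context window.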
5:22
But since then, over the course
5:24
of the last 18
5:26
months, we've come to recognize some of the limitations that you run
5:28
into with that
5:30
approach, right? You
5:33
know, not everything to do with understanding a given piece
5:35
of code is in the same file as that piece
5:37
of code, right? Like you have dependencies; you
5:40
know, the data schema is defined
5:42
elsewhere. And so to fully understand any one
5:44
element of code, you have to
5:46
be able to look around it as
5:48
well. And also, unlike
5:50
a good essay, right, where you have a
5:52
beginning, middle and end, code isn't necessarily organized
5:54
in that way. There's a different
5:56
kind of structure and flow to a document of code.
5:58
And you have to be aware of that when you're
6:00
reading through it, so that you don't
6:03
correlate things together that aren't related. And so
6:05
we recognized these
6:07
structural challenges, challenges that can
6:09
only be dealt with by thinking about the
6:11
structure and nature of code, and then started
6:13
expanding the tooling that we were looking at
6:16
to take a more balanced approach
6:18
there, using things like parsers and
6:21
dependency graphs to be able
6:24
to power the understanding of that piece
6:26
of code, or the explanation of that piece of code,
6:28
a bit better with related information, and to be
6:30
able to walk through it more
6:32
easily.
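As a rough illustration of that structure-aware step, the sketch below scans each program for references to other programs and copybooks and records them in a dependency graph that later stages can walk. The regexes are simplistic stand-ins for a real parser, and the function names are hypothetical:

```python
# A sketch of building a dependency graph from legacy sources.
# The regexes approximate COBOL CALL/COPY statements; a real tool
# would use a proper parser and abstract syntax trees instead.
import re
import networkx as nx

CALL_RE = re.compile(r"CALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)
COPY_RE = re.compile(r"\bCOPY\s+([A-Z0-9-]+)", re.IGNORECASE)

def build_dependency_graph(sources: dict[str, str]) -> nx.DiGraph:
    """sources maps a program name to its source text."""
    graph = nx.DiGraph()
    for name, text in sources.items():
        graph.add_node(name)
        for callee in CALL_RE.findall(text):
            graph.add_edge(name, callee, kind="call")    # program-to-program call
        for copybook in COPY_RE.findall(text):
            graph.add_edge(name, copybook, kind="copy")  # shared data definition
    return graph
```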
6:34
As Tom was saying, we went through
6:36
lots of different experiments to figure
6:38
out the right sort
6:40
of answer for this problem. The
6:43
problem being how do I understand a
6:45
large legacy system? And one, I
6:47
guess, insight we
6:49
had while we were going through those experiments, some
6:51
of them not successful, is that
6:55
the trick is to give the LLM
6:57
the right context. And
7:00
in a large code base, getting the
7:03
right context is a hard problem. Somebody
7:05
used this analogy that
7:07
it's like an open book test. If you know where the
7:09
answer is, it becomes a much simpler
7:11
problem. If you don't know where the answer
7:14
is, then the book doesn't help. And legacy
7:16
systems are not a book because they're not
7:18
that well written. But I'm sure you can
7:20
understand the analogy that LLMs need
7:22
to be given the right context for them to
7:24
generate the right answer. And
7:27
so that's where our engineering is focused. All
7:30
the engineering is focused on how do we
7:32
get the right context for the question from
7:34
the user. Yeah, it's a little bit
7:36
of an optimisation problem, to get just the right thing, because of the
7:38
context window limitations as well.
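A minimal sketch of that "open book" idea, reusing the kind of dependency graph sketched above: starting from the program the user asked about, walk outward to its dependencies and pack related sources into the prompt until a token budget runs out. The breadth-first ordering and the four-characters-per-token estimate are crude assumptions:

```python
# A sketch of assembling "just the right" context under a token budget.
import networkx as nx

def assemble_context(graph: nx.DiGraph, sources: dict[str, str],
                     start: str, budget_tokens: int = 8000) -> str:
    parts, used = [], 0
    # Breadth-first from the program in question: the program itself,
    # then its direct dependencies, then theirs, until the budget is spent.
    for node in nx.bfs_tree(graph, start):
        text = sources.get(node, "")
        cost = len(text) // 4  # rough token estimate
        if used + cost > budget_tokens:
            break
        parts.append(f"==== {node} ====\n{text}")
        used += cost
    return "\n\n".join(parts)
```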
7:41
I should have mentioned it at the top, but y'all
7:43
are two of three authors of a fairly large
7:46
article on this, which we'll link in the comments so people can
7:48
read it. But one of
7:50
the things you talk about there is the challenges
7:53
of legacy modernization. We've already touched on it a
7:55
little bit, but from an
7:57
organisational perspective, what are the
7:59
kinds of challenges that you see
8:01
that generative AI specifically helps mitigate? I
8:05
think at the highest level, we
8:07
sort of alluded to the cost-time-
8:07
value equation of legacy modernization, right?
8:11
And like, I
8:13
always feel there was a
8:15
time when we used to talk about stuff like, "Hey,
8:18
that's rocket science," or "It's not rocket science."
8:20
And I think there's something that our industry
8:22
needs to do when it's easier to launch
8:25
rockets into space than to modernize legacy systems,
8:27
right? So the
8:29
cost-time-value equation of legacy modernization sort
8:31
of alludes to that: how much
8:34
effort is it
8:36
to modernize legacy systems versus
8:39
some other things happening in
8:41
the world. And
8:44
I guess primarily,
8:46
one of the hypotheses we are applying is that a lot
8:49
of the cost and time is also because of
8:51
the cost of delay of understanding legacy systems,
8:54
because one characteristic, or
8:56
many of the characteristics, of legacy systems
8:58
are around the fact that documentation is
9:00
stale or absent. There are no good
9:02
safety nets around it. The SMEs around
9:05
it have either disappeared, moved
9:07
on, or are just not there.
9:10
And that adds a lot of cost
9:12
of delay to that modernization program. So
9:14
that's one element of it. The other
9:16
element of it is now addressed by
9:19
a lot of the forward engineering tooling,
9:21
so coding assistants and the
9:23
like, which is that the legacy system is
9:27
30, 40, 50 years of investment by an
9:29
organization, so it's quite a large
9:31
quantum of work, right? And the expectation is to reproduce
9:33
it in months or years.
9:36
And it's just by nature, it just takes time
9:39
to replicate something that you've been working on 30,
9:41
40, 50 years, right? So
9:43
I think what we are trying to
9:45
do is figure out the right pockets
9:47
where GenAI can impact
9:49
the cost-time-value equation of these
9:52
legacy modernization programs and
9:55
code comprehension is definitely one area
9:57
where we've found a lot of
9:59
success. Coding
10:02
assistants are another area where I think we've all,
10:04
as an industry, seen some success. And
10:07
it feels like this is still
10:09
just the tip of the iceberg. There's more
10:12
areas to explore and see
10:14
how we can make that impact better. You
10:18
touched about years and decades there. I
10:21
know some of the systems I've worked on,
10:23
we have no idea why things were even
10:25
in there. I mean, not only what, but
10:27
what was the purpose or whatever. In your
10:29
article, you actually talk about, I don't think
10:31
you use these terms, but back porting and
10:33
getting the requirements from the code. Somehow looking
10:36
at the code and trying to understand what
10:38
is this trying to do? What's
10:40
that process? What's that look like? And what are
10:42
the benefits to doing that? So I
10:44
guess what is the purpose or what's the process of
10:46
modernization? Modernization is there
10:49
to typically replace or refresh
10:51
a set of technology that you can
10:53
no longer maintain, that is no longer
10:55
fit for purpose, or that isn't allowing you
10:57
to change at the pace that you need to. However,
10:59
it still is performing a vital function
11:01
for your business. So
11:07
that function,
11:09
or at least parts of it, will
11:11
still need to continue in the new
11:13
modernized world. And so when
11:17
we're talking about kind of modernization, one of the things that
11:19
we need to do is get the requirements, get the
11:21
understanding as to what the code is doing or what the
11:23
system is doing. And so using
11:26
generative AI, we're seeing
11:28
that we can speed that process up.
11:33
So the process itself looks like
11:36
what we were describing earlier: using
11:39
generative AI, large language
11:41
models, abstract syntax trees,
11:43
parsers and dependency graphs to be
11:45
able to walk through the code, provide
11:47
exactly the right context to the
11:49
LLM, and prompt it to produce
11:51
descriptions of what the code is doing,
11:54
which we can then treat as requirements that we
11:56
decide whether to carry forward or not.
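A sketch of what that last prompting step might look like; the prompt wording is an assumption, and the model call is passed in as a plain callable so the sketch stays independent of any particular client:

```python
# A sketch of turning comprehension output into candidate requirements
# for humans to accept, edit, or discard.
from typing import Callable

REQUIREMENTS_PROMPT = """Below is a unit of legacy code together with the
definitions it depends on. Derive the business requirements it implements.
For each requirement give: a one-line statement, the rule or calculation
involved, and the code location it came from. Flag anything that looks
like dead code or a workaround rather than a business rule.

{context}
"""

def extract_requirements(context: str, ask_llm: Callable[[str], str]) -> str:
    # context = the code unit plus the related sources assembled earlier.
    return ask_llm(REQUIREMENTS_PROMPT.format(context=context))
```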
11:59
Another hallmark of legacy systems is that
12:02
they have, you know, processes
12:04
that are in place because
12:06
they haven't changed, I guess; they
12:08
haven't been updated as the business has changed over
12:10
time. And so people, or
12:12
the employees of the company, may still be
12:14
following, you know, processes that
12:16
are out of date or
12:19
are unnecessarily complex. And so
12:21
for us, one of the
12:23
reasons why we like to have a
12:25
human in the loop, or a
12:27
modernization process
12:29
that involves re-engineering,
12:31
re-architecting, re-imagining what the
12:34
future looks like is so that we can get rid of
12:36
the cruft, the dead business
12:38
processes as well as the dead code itself, when
12:40
we're going forward. So yeah, the
12:42
benefit there is that using generative
12:45
AI, it's hopefully going to take much, much less
12:47
time to get
12:49
those requirements out. But then you still
12:51
need that human in the loop to get rid
12:53
of what's not needed as well. So you touch on something
12:55
that I think is important there, the human in the loop.
12:57
One of the things that I know
12:59
on different ThoughtWorks topics, whenever we talk
13:01
about generative AI or AI or machine
13:04
learning or what have you is
13:07
the necessity of having someone that
13:09
knows what good looks like. How
13:12
does that work? Or is that a factor here? Because
13:14
I mean, I could just write a program, right? That
13:16
translates the thing, but how,
13:18
and what is that human in the loop?
13:20
How do you use, whether it be tools
13:22
or processes or, you know,
13:25
sticks or carrots, how do you
13:27
get the human in the loop that knows what
13:29
good looks like to participate? We
13:32
were talking about two, maybe different areas of
13:34
application for AI. And maybe the answer is
13:36
different for those two. In the
13:38
area that we just talked about, in terms
13:41
of comprehension, it's almost
13:43
an easier answer because the consumer
13:45
is a human. I think there's less of a question
13:47
of how to get them in the loop because they're
13:50
at one side of the equation, right? One side of
13:52
the equation is the legacy system. And then
13:54
there's some technology in between; at the other side
13:56
of the equation, the consumer is the human. And
13:58
so is
16:00
more technical, like what is this function
16:02
doing, etc. But
16:04
one of the things we hear from people in the
16:06
quote-unquote business is, "Well, modernization,
16:09
you're just taking functionality that I have
16:11
and rewriting it in a different language."
16:15
Is this helping with building the
16:17
high-level explanations that
16:19
we can share with the non-technical stakeholders,
16:21
put it that way, during a
16:24
modernization project? Does this help with that at
16:26
all? When you're kicking
16:28
off a modernization program, we've talked a little
16:30
bit about some of the challenges that businesses
16:32
are trying to overcome by changing that technology
16:34
stack. But even within
16:37
that modernization program, there tends to
16:39
be a prioritization
16:41
that you can apply to aspects
16:43
of the system. So to
16:46
choose the order in which you might want to modernize,
16:48
or choose the things, you need to understand,
16:50
at a higher level, what are the things involved in
16:52
that system, so that you
16:54
can make decisions about whether you
16:56
need to re-imagine or re-engineer
16:59
those functions, or
17:01
whether you can potentially look to now buy
17:04
something off the shelf to support them. And so it's
17:07
quite hard, if we're talking
17:09
at a level of, say, stories or
17:11
even epics, in terms of the requirements you've
17:13
extracted from existing code, it's very hard to
17:16
abstract sufficiently to be able to compare whether
17:18
what you've got in that
17:21
code is fulfilled by an existing system. So I
17:23
mean, taking a very, very basic
17:25
example, if we talk about like identity systems,
17:27
right? Back in
17:29
the old days, you had to roll your own;
17:31
a lot of legacy systems will have their
17:34
own identity and access management sort of
17:36
capabilities built into them. And so
17:38
nowadays, you don't typically do that, right? Like it's a huge
17:40
risk to run your own one of those. Most people these
17:42
days will be buying something off the shelf or licensing a
17:45
SaaS product to do that. And so
17:47
it's that kind of decision about
17:49
being able to replace those functions, those
17:52
capabilities, of the existing system: you need to abstract to
17:54
a sufficient level to know that
17:56
there's something out there in the industry to
17:58
do that for you. developed
24:00
in a different paradigm, right? And so translating
24:03
between those two paradigms,
24:06
I guess, in our opinion, has not yet been
24:08
achieved. There are lots of code translation tools out
24:10
there actually. And the common sort of
24:12
view is that it's COBOL to
24:14
Java, right? So you will see COBOL in
24:16
the shape of a Java class, but it
24:18
would have similar variable names to what it had
24:20
in COBOL. It would
24:22
have similar method calls
24:25
to those that were there in COBOL,
24:27
right? And that
24:29
wouldn't be modernization because you
24:31
modernize it to decrease your
24:33
cost of change. Now, if
24:36
a new Java system is as
24:38
difficult or more difficult to understand than the
24:40
COBOL system, then you've not achieved
24:42
your goal. So there are lots of translation tools
24:44
out there, but we are
24:47
hoping with GenAI
24:49
that translation can be higher quality because
24:51
one of the challenges is how do
24:53
you translate procedural code to object-oriented code?
24:55
How do you translate the variable names
24:57
that were written 30 years
25:00
ago that were more optimized
25:02
for maybe storage or,
25:04
you know, the constraints of hardware in
25:06
that environment, to today's computing paradigm, which
25:09
is completely different.
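One way to push a translation beyond "COBOL in the shape of a Java class" is to spell the target idioms out in the prompt. This is a hypothetical prompt, not a proven recipe, and the WS-CUST-BAL example name is made up:

```python
# A sketch of a translation prompt that asks for idiomatic, object-oriented
# Java rather than a line-by-line transliteration of the COBOL.
TRANSLATE_PROMPT = """Translate the following COBOL into idiomatic,
object-oriented Java. Do NOT translate line by line. Instead:
- group related data items into well-named classes;
- replace storage-optimised names (e.g. WS-CUST-BAL) with
  intention-revealing ones (e.g. customerBalance);
- use BigDecimal for monetary amounts rather than scaled integers;
- note in comments any behaviour you were unsure how to carry over,
  so a human reviewer can check it.

{cobol_source}
"""
```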
25:11
So it's an area where we think
25:13
there is some hope, but again,
25:15
there's a lot more to do
25:17
before we can say this is an
25:19
area where this can help.
25:22
So, just two more questions to close it out.
25:24
And these are going to be pretty speculative.
25:26
Fair warning. And not necessarily
25:29
just, I know both of you live
25:31
in the modernization world right now, but
25:33
just in general, looking into your crystal ball,
25:36
you know, is GenAI
25:38
the miracle that they would
25:40
have us believe in the keynote we saw this week? It's,
25:43
I mean, yeah, predicting the future is always
25:45
hard. And
25:47
we have seen, I guess, there is precedent for
25:49
multiple technologies being on the hype cycle and then
25:52
not doing as
25:54
well as we thought. But overall, we
25:56
feel we need to be cautiously optimistic. So a
25:58
bit of caution, to make sure you
26:00
don't believe everything that's being said; it
26:02
definitely can't solve all the problems of
26:04
the world. And
26:07
so our focus is always on
26:09
problem-solution fit. So find the
26:11
problem that's a fit for this solution rather
26:13
than just trying to apply it
26:16
everywhere. I
26:18
think there is still a
26:20
place for traditional AI, right,
26:22
like all the other forms
26:24
of AI that were there pre-GenAI. Again,
26:27
one learning that we've had through development of this
26:30
tool is that GenAI actually works better when you
26:32
pair it with these tried and tested other technologies,
26:34
right? So a lot of this tool is abstract
26:37
syntax trees, graphs; these are all
26:39
technologies and approaches that have been
26:41
there for 20, 30 years. And
26:44
so we've married them with GenAI,
26:47
rather than just GenAI being the answer
26:50
to everything. So I think we
26:53
are definitely being cautiously optimistic,
26:55
but also sort
26:58
of investigating more avenues where we think
27:00
it can help. But
27:02
to be honest, time is the best future teller.
27:05
Somebody on a previous podcast said, you know, we still
27:07
need to know if-else; not everything's
27:09
AI. Tom, what about you?
27:12
What do you think about the future? Where are we,
27:14
where's Gen AI going to be effective or not? And,
27:17
you know, I know these are guesses, folks. I don't mean
27:19
to put you on the spot, but I think your guesses
27:21
are better than most people's facts. One
27:23
area I'm particularly hopeful for, if
27:25
maybe not excited about, I guess, but one
27:27
thing I think could be very
27:29
powerful is, especially in the field of modernization, or looking
27:31
just a little bit further ahead in terms of modernization,
27:34
is around the safety
27:36
nets that we might want to have in place,
27:40
that we need to get put in place, to
27:42
be able to do that modernization, right? So,
27:44
you know, when you're taking some 40-year-old system
27:46
and trying to move to a new modern
27:49
architecture or a new modern kind of version of it,
27:52
you need to make sure that, you know, where
27:54
you are replacing a function
27:56
that has been there for
27:58
a long time, the pace at
28:00
which you can do that is limited by the quality
28:02
of the tests or the existence of tests, which
28:06
aren't always there in these cases. So I think
28:09
one of the areas that I'm
28:12
excited or hopeful that we'll see is
28:16
how can generative AI help us get more
28:18
safety around the existing systems to be able
28:20
to not just describe what they do, but
28:22
almost provide test cases for how that works
28:26
right now, so that we can then cross-compare against the
28:28
future as well. That's definitely
28:30
one area I'm super excited about.
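As a sketch of that safety-net idea: ask the model to propose characterization test cases, inputs paired with the outputs the system produces today, which humans then verify by running them against the legacy system before trusting them. The prompt wording and the callable are assumptions:

```python
# A sketch of proposing characterization tests that pin down current
# behaviour, to cross-compare the legacy system against its replacement.
from typing import Callable

CHARACTERIZATION_PROMPT = """From the following legacy code, propose
characterization test cases that capture its current behaviour, including
edge cases and error paths. For each case give: input values, the output
the code produces today, and the branch of code exercised. Describe what
the code actually does, not what it arguably should do.

{code}
"""

def propose_characterization_tests(code: str, ask_llm: Callable[[str], str]) -> str:
    # Proposed cases should be executed against the real system to confirm
    # them before they are trusted as a safety net for modernization.
    return ask_llm(CHARACTERIZATION_PROMPT.format(code=code))
```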
28:32
I guess I'm a little bit
28:34
less certain about this next one, but
28:36
I'm very excited by
28:38
a lot of the experimentation that's going on around
28:40
the creation of software, I
28:43
guess the creation of software by
28:46
generative AI agents and all these sorts of
28:48
things. But as I was saying, there's
28:50
going to be a necessary set of frameworks
28:52
or infrastructure that sits around these LLMs to be
28:54
able to ensure the right context is given to
28:56
the LLM at the time when we're generating code or
28:59
when we're trying to create a system. And
29:02
there is complexity in that. I think there's
29:04
one level of complexity to be able to
29:06
extract understanding out of the existing system and
29:08
represent that in graphs or whatever. It's probably
29:10
an extra couple of orders of magnitude
29:12
harder to be able to retain that
29:14
context as we're building out a new system
29:18
on the other side of that, almost
29:20
in reverse of the
29:22
kind of understanding-and-explanation sort
29:25
of thing. So I'm excited about that. I think it's
29:27
probably a little way off. Hopefully
29:29
I'm going to be proved wrong, but I think it's probably still a little
29:31
way off at this point. What are
29:34
the key considerations and best practices
29:37
you'd recommend, whether
29:39
or not they're using ThoughtWorks or our
29:41
tools, frankly, for an organization
29:44
that's wanting to leverage GenAI for legacy
29:46
systems? You know, what are some
29:48
concrete next steps they can take to try to
29:50
help ensure success? So I'll
29:52
maybe repeat some of the stuff that I've
29:55
already mentioned. So: don't look
29:57
at GenAI as the one-stop solution,
30:00
the silver bullet. At
30:04
least in our experience, all production solutions
30:07
are a combination of technology paradigms
30:09
that are sort of married with
30:11
GenAI. The other one
30:13
I would say is: focus on
30:15
problem-solution fit and,
30:20
if you're doing
30:22
something new, treat it as an experiment. Be
30:24
ready to pivot and
30:26
learn from that experiment and try different approaches.
30:31
And then the other thing, I guess, is
30:34
that this
30:36
is a different paradigm in terms
30:38
of the determinism that we are
30:40
used to. So it
30:43
does take a while to start becoming
30:45
comfortable with that. And
30:48
there are some, I
30:51
think, new skills that we will have to add.
30:53
Prompt engineering is the
30:55
most commonly talked-about skill
30:57
now. It takes some experience to figure out the
30:59
right crafting of those prompts and the right way
31:01
to sort of ask it a question, the right
31:03
context to provide it. And
31:06
so those are all things that organizations will have to
31:08
learn, with or without ThoughtWorks,
31:11
if they want to survive in a GenAI world.
31:14
Tom, what do you think? What's your actionable advice
31:16
here? It's okay if there's some
31:18
overlap. Yeah, no, I think I'll probably just
31:20
build on a couple of things that Shodhan
31:22
was sharing there. One of the
31:24
things that allowed us to
31:26
act on that experimentation point
31:28
that Shodhan was making is, I think,
31:31
the fact that we at ThoughtWorks had
31:33
internal access to some of these tools to
31:36
be able to experiment and drive this stuff
31:38
forwards was actually part of the reason we've
31:40
made as much progress as we have. So
31:42
I think a call out for organizations would
31:44
be: how
31:46
can you quickly put these tools safely into
31:48
the hands of your teams, your employees, so
31:51
that they can discover ways in which to
31:54
improve their processes, change
31:56
the way in which they're doing things, hopefully
31:58
for the better of their overall life. So definitely
32:00
one call-out is to enable
32:02
access there. I think the second one
32:04
is building on what Shodhan was
32:06
saying around different
32:08
approaches, right? One
32:11
specific example that comes to my mind is around,
32:14
you know, where previously developers have written,
32:16
you know, xUnit tests to
32:18
specify
32:20
the behavior of their stuff. We can't expect that with
32:23
generative AI, right? And so it's
32:25
about shifting to look at evaluation techniques,
32:28
like, for instance, using LLM-as-judge,
32:30
which is the one that's doing
32:32
the rounds at the moment, to be able
32:34
to validate, or to be able to evaluate,
32:36
the quality of the output of some of
32:39
these generative AI-based systems. It's like
32:41
that mindset shift from, yeah, determinism to
32:43
how do you deal with non-determinism,
32:45
but still get confidence in the system you're
32:47
building. It's definitely a shift,
32:49
yeah, definitely a shift, and it's
32:51
definitely making certainly me feel uncomfortable,
32:53
but I'm getting through it.
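For listeners who haven't met the pattern, here is a minimal sketch of the LLM-as-judge idea Tom describes: a second model scores a first model's output against a rubric instead of an exact xUnit-style assertion. The model name, rubric, and JSON shape are illustrative assumptions:

```python
# A sketch of LLM-as-judge: score a generated code explanation on a rubric.
# Assumes the OpenAI Python SDK; the rubric and scales are made up.
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are evaluating an AI-generated explanation of legacy code.
Score it from 1 to 5 on: grounding (is it supported by the code?),
completeness, and clarity. Reply as JSON:
{{"grounding": n, "completeness": n, "clarity": n, "rationale": "..."}}

CODE:
{code}

EXPLANATION:
{explanation}
"""

def judge(code: str, explanation: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            code=code, explanation=explanation)}],
        response_format={"type": "json_object"},  # ask for parseable JSON
    )
    return json.loads(response.choices[0].message.content)
```

Scores like these are noisy on any single sample, so in practice they tend to be tracked in aggregate over an evaluation set rather than gated on one judgment.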
32:55
Thank you both for taking the time, especially, you know,
32:58
as you're here in Chicago for an event and have got to head
33:00
home to London. Thank you very
33:02
much. And thank
33:04
you to our listeners and we'll see you next time. Thanks
33:08
again as well. Bye.