Episode Transcript
0:05
Welcome to the Analytics Power Hour. Analytics topics covered conversationally
0:10
and sometimes with explicit language. Hi, everyone. Welcome to the Analytics
0:16
Power Hour. This is episode 266. I'm Moe Kiss from Canva.
0:23
And I'm sitting here in the host chairs today because we're continuing our
0:27
great tradition of recognizing International Women's Day and all of the
0:32
amazing women in our industry. So it's coming up this Saturday,
0:35
March 8th, and we're going entirely gents free today. So of course that
0:40
means I'm joined by the wonderful Julie Hoyer from Further. Hey,
0:44
everyone. And Val Kroll from Facts and Feelings as my co-hosts.
0:48
Hey, Val. Hello. Hello. Are you ladies excited to know that Tim won't
0:52
be slipping into some of his quintessential soapboxing?
0:56
Save some for the rest of us. I don't think he'd be able
0:59
to help himself on this one. I know, I know. He's pretty gutted
1:04
to miss it. So, as we were planning the show today, I fired up
1:07
ChatGPT, which, to be fair, I'm a power user and I asked it
1:11
to compare our topics from the last 50 shows to the topics data
1:15
folks are most talking about these days and basically identify the gaps
1:18
in our content. So, unsurprisingly, the response it came back with was that
1:23
we should definitely talk more about AI, and it was in caps,
1:28
so maybe there's some bias in that model. Who knows? Weird.
1:31
But it's got a good point. And we've definitely talked about AI on
1:35
multiple episodes on the show, but we probably haven't talked about it nearly
1:39
as much as we could or as much as it's getting talked about
1:42
in the industry right now. So it seems like everyone is just so
1:46
excited about the possibilities. But lots of organizations are also struggling
1:50
to figure out how to actually identify, scope, and roll out AI projects
1:55
in a clear and deliberate manner. I think it's really about that shift
1:59
from the tactical day to day things to the real transformation that everyone's
2:04
seeking. And that's why for today's episode, we're joined by Kathleen Walch.
2:09
Kathleen is a director of AI Engagement Learning at the Project Management
2:14
Institute, where she's been instrumental in developing the CPMAI methodology
2:19
for AI project management. She is the co host of the AI Today
2:23
podcast, which I highly recommend checking out, and she's also a regular
2:28
contributor to both Forbes and Techtarget. She's a highly regarded expert
2:32
in AI, specializing in helping organizations effectively adopt and implement
2:37
AI technologies. And today she's our guest. Welcome to the show,
2:41
Kathleen. We're so pumped to have you here. Hi and welcome.
2:44
I'm so excited to be here today. I obviously love podcasts,
2:48
so I love being a guest on them as well. It's a different seat
2:50
for me today. It is definitely a different seat when you're a guest.
2:54
Hopefully a little lighter on the load. So just to kick us off,
2:59
I think one of the things that's really interesting about your professional
3:02
history is that you don't seem to be one of those people that
3:05
just stumbled into AI in the last year or so and have gone
3:10
full fledged on it. It really seems to be an area that you've
3:14
been working in deeply for an incredibly long period of time.
3:17
Maybe you could talk a little bit about your own experience and
3:20
the journey you've taken to get here. Yeah, I like that you bring
3:24
that up. I always say that I've been
3:26
in the AI space since before gen AI made it popular.
3:30
I feel like the past two years or so, everybody feels like they're
3:33
an AI expert and everybody is so excited about the possibilities.
3:38
But it's important to understand that we always say AI feels like the
3:42
oldest, newest technology because the term was officially coined in 1956,
3:47
so it's 70 plus years old. But we just feel like we're now
3:53
getting to understand AI. And there's a lot of reasons for this,
3:57
which we talk about quite often. But one big reason is that there's
4:00
been two previous AI winters, which are periods of decline in investment,
4:04
decline in popularity. People choose other technologies, other ways of doing
4:08
things, and a big overarching reason for that is over promising and under
4:11
delivering on what the technology can do. So it's really important to understand
4:14
that AI is a tool, and that there's use cases for it,
4:19
and it's not a one-size-fits-all tool,
4:22
especially when it comes to generative AI. So my background and what got
4:26
me here is actually I started off in marketing and then moved...
4:29
Yeah, I know. And then back when I was first coming out of
4:33
college (my husband's a software developer), I feel like the technology world
4:37
and marketing or creative world or anything else, they really were very
4:41
separate. And over the years they've merged closer together to the point
4:45
now that I think technology is infused in many different roles and not
4:50
as disparate as it used to be. Then I moved more into a
4:54
data analytics role. Learned all about the pains of big data,
4:58
how data is messy and not clean and all of that. And then
5:04
I moved into more of a technology events role where my husband and
5:10
I had a startup. It failed, but we met a lot of great people
5:13
from that community. Ended up working with my business partner from Cognilytica
5:17
for a company called TechBreakfast, where we did morning demo events throughout
5:22
the United States. And we were in about 12 different cities.
5:26
So from Boston, Massachusetts to the Baltimore DC Region, North Carolina,
5:32
Austin, Texas, really all over, a little bit in Silicon Valley.
5:36
But that's a unique space. And around 2016 we started to see a
5:40
lot of demos around AI and in particular voice assistants and how we
5:47
could be incorporating that. That was when all of the big players in
5:51
voice assistants started to come out. So we had Amazon Alexa and Google
5:54
Home and Microsoft Cortana, when that was still a thing. So from that
5:58
we said, there's something here. And we started an analyst firm actually
6:03
focused on AI. It was a boutique analyst firm, and very quickly realized
6:09
that organizations did not know how to successfully run and manage AI projects.
6:15
So they said, we want to use this technology, this is great.
6:18
How do we get started? And we said, okay, well, let's see if
6:21
there's methodologies out there and let's see if there's a way,
6:24
a step by step approach to do things. And what we quickly realized
6:27
is that there wasn't. And that's how CPMAI was developed,
6:30
which is a step by step approach to running and managing AI projects.
6:34
And it was important because people would try and run these
6:39
application development projects. And then you very quickly realize that
6:42
they're data projects and they need data centric methodologies, not software
6:47
development methodologies. And so these projects would be failing. Or they'd
6:51
say, we want to do AI and we go, well, what exactly do
6:54
you want to do? And they go, well, we have all this data, let's just start with the data and then let's just build this, pick
6:59
the algorithm and then move forward because there's FOMO, fear of missing
7:03
out. And we say, okay. But in CPMAI we always start with phase
7:08
one, business understanding, what problem are you trying to solve? And even
7:12
still today, many organizations rush forward with wanting to have an AI
7:18
application or just saying, oh, look at this large language model,
7:20
let's put it on our website as a chatbot. And far too often many
7:25
things can go wrong. We always say, AI is not set it and
7:27
forget it. So far too often we see that these chatbots are providing
7:32
wrong answers and that maybe we shouldn't have started so big in our
7:37
scope and we should have really controlled it and said, drill down into
7:41
what we're actually trying to solve. So we always say, figure out what
7:44
problem you're trying to solve first and really, really make sure that it's
7:48
a problem that AI is best suited for. Oh, my God,
7:52
this is music to my ears. I am seriously. Yeah, because there is...
7:55
I feel like I'm coming up so often against people that are just
7:59
like, let's use AI. And you're like, what's the problem?
8:03
Have you noticed, though, over the last few years, and I feel like,
8:06
especially in the last 12 months, do you feel like the industry is
8:09
maturing here or is it Groundhog Day where you just feel like you're
8:12
having the same conversation again and we're not at that stage yet where
8:16
people are maturing enough to be like, is AI the right solution here?
8:20
What are you seeing in the industry? So generative AI has made AI
8:26
available to the hands of many. So maybe we were using AI before
8:31
when we were pulling up directions with whatever you choose to
8:37
use, Waze, Google Maps, whatever it is that you're using,
8:39
it'll help route you. Or if you have predictive text with emails or
8:44
spam filters, that's using AI. But it didn't feel like we were using
8:48
AI because, yeah, it helped a little, but it didn't really
8:52
make my life more efficient. But now with tools like ChatGPT,
8:55
or, at PMI, we have Infinity, or Claude. I mean,
8:59
you literally pick the tool of choice and it can help you do
9:05
your job better. So it can help you... Or even Canva,
9:08
right? I love Canva. Now, I'm not a graphic designer by trade,
9:12
but now with the help of Canva, which is drag and drop,
9:15
but then with AI capabilities added onto it, I can do things that
9:19
I couldn't do before, like automatically remove background from an image
9:24
and just have one, like my head now, and I remove the background
9:28
from it, which is absolutely incredible and does not require me to have
9:32
to learn how to be a graphic designer. Or I can write better
9:35
copy for marketing campaigns, or I can create images for PowerPoint slides
9:40
that I no longer have to worry if I have rights to,
9:42
because I know I do, because I just created it.
9:45
So it is helping in that way. But then we also see, you
9:50
really need to drill down and say, okay, generative AI is just one
9:54
application of AI. And so a number of years ago, actually back in
9:57
2019, people said, well, I want to do AI. And we said,
10:00
well, what exactly are you trying to do? And there was a lot
10:04
of confusion about, is this AI, is this not AI? And we said,
10:08
why don't we just drill down one level further and say,
10:10
what are we trying to do? And that's where we came up with
10:13
the seven patterns of AI. So we looked at hundreds, if not thousands
10:16
of different use cases and they all fall into one or more of
10:19
these seven patterns. And so we said, why don't we just talk at
10:22
that level? Because then it really, it helps you with so much.
10:26
So the patterns at a very high level. And we made it a
10:28
wheel because there's no particular order and one isn't higher than another.
10:32
One is hyper-personalization: treating each individual as an individual.
10:36
We think about this as a marketer's dream. You're able to hit the
10:39
right person at the right time with the right message, but also hyper
10:41
personalized education, hyper personalized finance, hyper personalized health
10:46
care. How can we really start treating each person now as an individual?
10:50
And we can do that with the power of AI. Then we have
10:53
recognition patterns. So this is making sense of unstructured data. 80 plus
10:57
percent of the data that we have at an organization is unstructured.
11:00
Well, how do we make sense of that? So we think about image
11:03
recognition in this pattern. But you can have gesture recognition, handwriting
11:08
recognition, there's a lot of different things. Then we have our conversational
11:12
pattern. So this is humans and machines talking to each other in the
11:16
language of humans. This is obviously where large language models fall into
11:19
play. We think about AI enabled chatbots here. Then we have our predictive
11:24
analytics and decision support pattern. So this is taking past or current data and helping humans
11:28
make better predictions. So we're not removing the human from the loop,
11:31
but using it as a tool to help make better predictions.
11:34
Then we have our patterns and anomalies pattern. So this is where
11:39
we are able to look at large amounts of data and spot patterns
11:42
in that data or outliers in that data. We have our goal driven
11:46
systems pattern. So this is really around reinforcement learning and
11:50
optimization. So we think about how can you optimize certain things.
11:55
We've seen this actually with traffic lights. Some cities are adopting this
11:59
to help with the traffic flow and it can be adaptive over time.
12:03
And then also the autonomous pattern. So this is where the goal of
12:07
the autonomous pattern is to remove the human from the loop.
12:10
So this is the hardest pattern to implement. We think about this
12:12
with autonomous vehicles, but we can also have autonomous business processes.
12:16
So how do we have something autonomously navigate through our systems
12:22
internally at our organizations?
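For reference, here is a minimal sketch that collects the seven patterns as a plain Python dictionary, with the examples paraphrased from the discussion above; the key names and descriptions are a summary of what's said here, not an official PMI or CPMAI artifact.

```python
# The seven patterns of AI as described above, with examples from the episode.
# A plain dictionary is enough to make the taxonomy concrete; the wording is
# a paraphrase of the discussion, not an official PMI/CPMAI artifact.
SEVEN_PATTERNS = {
    "hyper-personalization": "treat each individual as an individual (marketing, education, finance, healthcare)",
    "recognition": "make sense of unstructured data (image, gesture, handwriting recognition)",
    "conversational": "humans and machines talking in human language (LLMs, chatbots)",
    "predictive analytics and decision support": "use past or current data to help humans make better predictions",
    "patterns and anomalies": "spot patterns or outliers in large amounts of data",
    "goal-driven systems": "reinforcement learning and optimization (e.g. adaptive traffic lights)",
    "autonomous": "remove the human from the loop (vehicles, business processes); the hardest to implement",
}

def describe(pattern: str) -> str:
    """Look up a pattern; a real use case may combine one or more of these."""
    return SEVEN_PATTERNS[pattern]

print(describe("autonomous"))
```

And so when we say, okay, well,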
12:26
what are we trying to do now? This helps us figure out what
12:28
data requirements we need. This helps us figure out if we're going to
12:31
be building this from scratch, what algorithm do we select? If we're going
12:35
to be buying a solution, what's going to be best suited for this?
12:39
Large language models aren't great for everything, and generative AI isn't
12:43
great for everything. So if we need a recognition system, then maybe we
12:47
shouldn't be looking at a large language model for that. If we want
12:51
a conversational system, then yeah, then that's great. And this really helps
12:54
us to drill down that one level further and say, what problem are
12:58
we trying to solve? What's the right solution to this problem?
13:01
Is AI the right solution? Okay, if it is, which pattern or patterns
13:04
of AI are we going to be implementing? And then from there we
13:08
can say, okay, we know what problem we're solving, AI is the right
13:12
solution for this, and now we can move forward. And if it's not
13:15
the right solution, that's okay. But you have to be honest with yourself
13:19
and with the organization. Because sometimes, I always say, don't try and
13:23
fit that square peg in a round hole. You don't want to shoehorn
13:26
your way just because you want to use AI, so you create the
13:31
problem that AI can solve rather than actually having it solve a real
13:35
problem. That was actually going to be my question. When you talk to
13:39
clients, do you end up showing them the seven patterns to start,
13:45
or is that like showing them the answers and then they want to
13:49
pick which one sounds coolest or that they had their mind set on
13:53
and then they shoehorn and create the problem. Do you have to try
13:57
to keep that blind from them to get the problem first?
14:00
Or how do you go about using that? So when we go through
14:03
the methodology, because that's what we really teach and follow this step
14:07
by step approach. So first you have to say, what problem are we
14:09
trying to solve? And within phase one, the business understanding, we have
14:13
a series of different steps that you're supposed to be going through.
14:17
So one of them is the AI go/no-go. So this talks about
14:20
business feasibility, data feasibility and implementation feasibility. So
14:24
do you have... what is your ROI, the return on investment?
14:29
You can measure this a number of different ways. I always say that
14:31
ROI is money, time and resources. AI projects are not going to be
14:36
free. And you really have to understand that. Sometimes people just go,
14:39
well, we're just going to do this. And I'm like, yeah, but it's not free, it costs a lot of money. And you measure that
14:45
however you want. Time is money. Resources is money. You only have a
14:50
finite amount of people that you can put on these projects.
14:54
Some organizations can have more than others, but still you have to be
14:57
mindful of that and so make sure that you understand the ROI that
15:01
you want.
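As a rough illustration of that go/no-go gate, here is a minimal sketch in Python; the three feasibility checks and the money/time/resources framing come straight from the discussion, while the field names, numbers, and threshold logic are illustrative assumptions, not PMI's artifact.

```python
# A minimal sketch of the CPMAI-style "AI go/no-go" check described above.
# Field names and the example numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIGoNoGo:
    business_feasible: bool        # clear problem, worth solving?
    data_feasible: bool            # enough quality data, and access to it?
    implementation_feasible: bool  # can we realistically build or buy this?
    expected_return: float         # ROI measured however you choose
    estimated_cost: float          # money, time, and people; AI projects aren't free

    def decision(self) -> str:
        if not (self.business_feasible and self.data_feasible
                and self.implementation_feasible):
            return "no-go: a feasibility check failed"
        if self.expected_return <= self.estimated_cost:
            return "no-go: insufficient ROI"  # working tech can still be a failed project
        return "go"

# e.g. a bot that works but costs more than it returns is still a no-go
print(AIGoNoGo(True, True, True, expected_return=1.0, estimated_cost=2.5).decision())
```

We go through a lot of reasons why AI projects fail,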
15:05
and not having sufficient ROI is a failure. So the project may be
15:11
doing what it's supposed to, but an example that we give is Walmart
15:15
decided to have an autonomous bot that roamed the store floors and would
15:21
check to see if there were items that were out of stock.
15:24
Well, I just said that the autonomous pattern is a really hard pattern.
15:28
It's the hardest pattern. So it's able to autonomously navigate, and then
15:32
it had the recognition pattern because it's scanning the shelves to see
15:35
if inventory is out of stock or mis-stocked. Well, what they could
15:40
have done is we always say, think big, start small, and iterate often.
15:44
So don't try and do everything all at once. Figure out what is
15:48
that problem you're trying to solve. Okay, you're trying to solve a problem
15:51
with inventory not being on the shelves. Well, maybe start with the aisle
15:56
that has the most need, not the entire store. And you already have
16:01
humans that are walking the floor. So maybe put a camera on the
16:04
shopping cart and say, okay, now, how is this going to solve that
16:08
actual return on investment? And was this really a problem that we needed
16:11
AI for? Could we have done it cheaper or quicker or better with
16:15
humans? Because we still need a human to go and actually restock the
16:19
shelves. We didn't have autonomous systems that were able to go and autonomously
16:23
restock the shelves. So they ended up scrapping that in favor of humans
16:28
because the return wasn't worth it. So did whatever they build work?
16:33
Yes. But was it still a failure because the investment was higher than
16:37
the return? Yes. I'm sorry, I've got to interject. That example is so
16:42
incredibly interesting because it also sounds like they had this learning
16:46
after building it. Whereas if someone had done their due diligence of like,
16:51
what does it cost for a person to walk the store for 20
16:53
minutes and check versus like the tech and the infrastructure and the data
16:57
and all the things we need to build this, you probably could have
17:01
answered that ROI question before you started the project, but do you feel
17:05
like most companies have to almost do it to learn it and then
17:08
they make the mistake and move on? Or is it... Tales of caution?
17:12
Yeah, like, are people good enough at figuring this out before they
17:15
build it or is it only after? So a lot of people aren't
17:18
following that step by step approach. And when they're not, you can tell.
17:22
So Walmart is incredibly innovative. And they really push boundaries with
17:27
technology, but it's not always the right path forward. And so if you
17:31
go, okay, well, I don't have the resources of a Walmart. I don't
17:35
have the money that I can invest in some of these R&D projects
17:38
or putting out a pilot project. Another thing that we see,
17:43
another common reason for these failures is that we get into this proof
17:47
of concept trap and so we say, never do a proof of concept
17:51
because it actually proves nothing. You build it in a little sandbox environment.
17:54
It's usually the people that are most closely aligned with the project.
17:58
So they're going to be using it in the way that the tool
18:01
was intended to be used, not the way that humans actually are going
18:05
to use it out in the real world. And then data is messy.
18:10
Usually in a proof of concept, you have really nice clean data that
18:14
you're working with. And then you go out in the real world and
18:16
you're like, why didn't this work this way? Why are these users doing
18:20
things that I wasn't planning for? Why are you using it this way?
18:23
That's not how it was supposed to be used. And I was like, yeah, but that's how your users are using it. So we say,
18:29
get it out in a pilot and have it be in the real
18:32
world and see how it's being used. So if they had put this
18:35
out in a store or two and said, okay, this isn't working as
18:38
expected, this isn't providing the returns that we wanted, maybe we didn't
18:42
invest a ton of money, we invested some money and we're trying it
18:45
out, but it didn't work out as we planned and so it's not
18:48
worth scaling. So the verbiage of use case is really common. A
18:53
lot of the clients that we work with, they have like their AI
18:56
use case that they tote around with them. And I feel like that
18:59
is not. I heard you say use case, but I feel like you're
19:03
using it differently. It almost feels like a use case is we want
19:07
an autonomous vehicle to go find the open spaces on the shelf,
19:12
not the problem framing that you're talking about. So how often is there
19:17
too much momentum down this path and this inertia of we have this
19:21
use case in mind, our OKRs are aligned to completion of this project
19:25
and so it's like really hard to turn the Titanic? Or you can
19:29
just talk about righting the ship. And if you think that that use
19:32
case language is at odds with the problem-solution framing. Yeah,
19:39
and that's a tough question, because you sometimes have an application, an
19:45
AI application. You have something that you want to do and maybe a
19:49
senior manager or someone in leadership is saying that that's what they
19:53
want and you've already invested a lot of money, time and resources into
19:58
it. And so it's their little pet project. And to pull back from
20:03
it can be incredibly difficult. People also have those ideas in their mind
20:10
about what they want and they try and shoehorn it. And so you
20:14
go, well, I want an autonomous vehicle. So let's figure out how we
20:18
can get an autonomous vehicle on the store shelves. And when people talk
20:23
about use cases, case studies, I feel like those words get thrown around
20:27
a lot. And it's like, what exactly do you want with a case
20:30
study? How is that defined versus your use case versus what it is
20:35
that you want? So we always say figure out what problems you have. And
20:39
this requires brainstorming, this requires actually saying what problems
20:45
are we trying to solve? And write it down, and bring different groups
20:48
together and say, what are we trying to solve? And then from there,
20:53
when we talk about the patterns too, you can look at it from
20:56
one of two ways. You can either look at it as what's the
20:59
ROI that you want, and then figure out which pattern is best for
21:02
that. Or you say here's the pattern and then you figure out the
21:07
ROI. So when you say I want this pattern and then you figure
21:12
out the ROI, sometimes that's shoehorning because you're like, oh, well
21:14
that's an okay ROI, sure. But if you go, I want my organization
21:20
to have 24/7 customer support. Well then you go, okay,
21:25
well then, what's going to drive to that? And that would probably be
21:28
a chat bot, for example. So you go, okay, well then that's what
21:30
we should be doing. And if Walmart had said, what exactly are we
21:35
trying to do? And we're trying to stock shelves better and it's like,
21:39
well, what's the actual return? Drill down even further. Well, what is the
21:44
real return from that? Because you want more satisfied customers or because
21:48
you want better inventory management or something like that, rather than just
21:52
saying, well, let's have something roaming the store shelves to say when
21:56
we're out of an item, maybe we should be fixing something with the supply
22:00
chain earlier on. Is that the biggest failure point you find?
22:05
Is the identify the problem part that we've been talking about?
22:09
Or is it oh, we can help 80% of clients that come to
22:14
us get past that point and then the biggest failure point of the
22:18
AI project is actually later on? There's 10 common reasons that we've identified
22:24
for project failure. Oh yeah. So one of them is running your AI
22:31
projects like a software application project. It's not, it's a data project.
22:36
You need data centric methodologies. You need to have a data first mindset.
22:40
Yeah. Then obviously, if data is the heart of AI, we're going to
22:45
have data quality and data quantity issues. How much data do you need?
22:49
I know a lot of times, especially with analytics, we talk about how
22:54
you can train on noise. More data isn't better. So you have to
22:59
say, what data do I need? And then, do we have access to
23:03
that data? Is it internal, is it external? Are we going to be
23:07
adding more data and then just feeding it more noise? I mean,
23:10
we have so many failure reasons. There was a, I think it was
23:16
a forest, maybe US Forestry, it was one of the government agencies,
23:20
and they were trying to count the number of wolves that were migrating
23:23
in a national park, which is a great use case. You put a
23:27
camera out and you can do the recognition pattern so that you're not
23:30
having humans who are there, which isn't really great and conducive to being
23:35
there for however long you're trying to track these wolves. So,
23:38
okay, that's a good use case. Well, what they realized was that it
23:42
ended up being a snow detector, not a wolf detector, because what it
23:45
was being trained on. Especially some of these deep learning models,
23:50
for example, are a black box. So we don't actually know what it's
23:54
using to learn. And so they realized, they said, okay, well that's not
24:00
performing as expected. So then that's another common reason. Like I said,
24:05
proof of concept versus pilot. You're not putting it out in the real
24:07
world until you've invested all of this. I love that distinction. So good.
24:10
Yeah. And I cringe when people always talk about proof of concepts because
24:13
I'm like, I don't think you mean that. And I'm like,
24:17
you really mean a pilot. And if you don't, you should be meaning
24:19
a pilot. And then also a reason I talked about earlier, the number
24:26
one reason is over promising and under delivering. That's what brought us
24:30
to two previous AI winters, and it will bring us into another if
24:33
we continue to act like AI can do more than it actually can.
24:37
So the ROI part of this seems like it's very much tied to
24:41
this expectation setting. I'm really curious about this especially. I just
24:45
don't know how you even get a full team on board with this
24:48
type of thinking. Even if, let's say Walmart started with MVP of putting
24:53
the camera on the shopping cart, would they have been able to understand
24:57
the actual investments it would take to run with the full product versus
25:02
just the MVP? Or how does that play into the ROI conversation? Because
25:08
it seems like that's so tied into the expectations.
25:11
Yeah. And we don't do implementation. So I'm not there helping these organizations.
25:16
So I don't get to always hear through the entire conversation. But these
25:22
should be short, iterative sprints. And so we say, if you really need
25:26
to be mindful of what it is you're trying to solve,
25:29
make sure that you're not... You want to solve something big. So think
25:33
big, but then start small and then make sure that it's actually solving
25:37
a real problem. Another example that I like to use that I think
25:39
provides a really good example of a positive return on investment is the US
25:46
Postal Service. It was around the holidays and they were getting
25:50
a lot of calls to their call center, more than usual because it's
25:54
the holiday season. And so you think about, well, what's the number one
25:57
question that they get asked? Track my package. So they said,
26:01
we are not going to have a chatbot that can answer 10,000 questions.
26:04
We are going to have a chatbot that can answer one question,
26:07
track my package. So we can say, what is that return going to
26:11
be? Well, the return on investment is we want to reduce call center
26:14
volume because our call center agents can't handle the volume that they're
26:17
getting. They said, okay, we're going to have it answer that one question.
26:20
We can compare it to data that we've previously had. They said,
26:24
yes, this is a positive return. It is decreasing call center volume and
26:28
improving customer satisfaction because people can figure out where their
26:31
package is a lot quicker. From that they said this was a positive
26:34
use case. Now we can go to maybe the second most asked question
26:37
and then the third most asked question rather than saying, let me start
26:41
and answer 10,000 questions all at once, which a lot of people are
26:46
getting into trouble now because they just throw a chatbot on their website.
26:50
They're not testing it, they're not iterating on it, they're not making
26:53
sure that it's answering those questions correctly. And they're not thinking
26:56
big, but starting small. They're thinking big and then starting big.
27:00
So they're saying, I'm going to put a chatbot on my website that
27:02
can answer a bazillion different questions. And then it starts giving wrong
27:06
answers and then they get into a lot of trouble. We've seen this
27:08
with Air Canada, we've seen this with the city of New York.
27:11
I mean, we've seen this with Chevrolet dealerships that have chatbots on
27:15
their site. So like, I don't even need to make stories up.
27:18
It's like every day there's a new story about some failure.
27:21
But is that also, coming back to your point about, I was trying
27:24
to conceptualize the over promising point and it seems like that's intertwined
27:29
with this huge scope creep that then happens with many projects that it's
27:33
like, the scope becomes so wide and there's also this assumption that AI
27:38
can handle a big scope, but actually by doing that, you almost
27:43
burn the house down before you've even started building it. Yeah.
27:47
So over promising can be scope. And it also just, we over promise
27:53
what the technology is capable of doing. So we say it can do
27:58
all of these things and we're like, but it can't really. Or we're
28:02
trying to apply it in ways that it shouldn't be used.
28:06
So then it's not providing the answers that we want or that return
28:10
that we want. And then people go, well, now I'm frustrated,
28:14
it's not delivering on what we said it would. So we're not going
28:17
to use it anymore. And we go, yes, because if it doesn't fall
28:21
into one or more of the seven patterns. So another example is what
28:24
I did not say was a pattern of AI: automation.
28:28
Automation is not intelligence. It's incredibly useful, but you're just
28:32
automating a repetitive task. And so we think about RPA technology and that's
28:38
incredibly useful, but it's not AI. And so sometimes people want to make
28:42
things more than they are. Or if we don't, if the technology isn't
28:46
there. So an example, back in the first wave of AI,
28:50
back in the 1950s and the 1960s, we wanted to have voice recognition,
28:54
and we wanted to have cockpits that were voice enabled so that pilots
28:58
didn't have to have all these switches and levers and they could just
29:01
talk. But we didn't... That technology wasn't where it is today and so
29:06
it wasn't ready. Right. So we had, we over promised on what we
29:09
could do and then under delivered because we didn't have what we needed.
29:12
And so we're even starting to hit some of that today, where
29:16
we don't have machine reasoning. So we can't ask these systems to do
29:22
more than they really can. And if we don't understand those constraints,
29:26
this is where we run into issues. I am dying to dig into
29:30
something that you've alluded to twice, that a lot of AI is actually
29:36
a data problem. The reason I want to dig into this specifically is
29:39
I think there is a perception often in the industry that it's a technology
29:45
problem that's solved with product managers and software engineers and that
29:49
sort of thing. How have you navigated that? 'cause like, we're three data
29:53
folks who probably appreciate the difference here and technologists in general
30:00
are amazingly smart, curious people. But there are still nuances to data
30:04
that are not fully appreciated. In the same way that I don't fully
30:07
appreciate the complexity of backend systems or front end code or things
30:10
like that. How do you navigate that in a business? Yeah,
30:14
we always say it's people, process and technology, this three-legged stool.
30:18
And the easiest thing to do is to fix the technology.
30:24
Fix, I air quote that. So you just add a new technology or
30:28
you add a new vendor because it's the easiest, because you can buy
30:32
it. And it's something that people feel is within their control,
30:35
but it doesn't actually fix the problem. And then process, that's harder
30:41
to fix. And so we need to say okay, maybe the way that
30:44
we're doing it, we can be agile, but we shouldn't follow agile from
30:51
that software development angle. We need to follow data centric methodologies.
30:55
And that's also people. And so it's really important to understand that
31:01
these are data projects and data is the issue, which, I don't know, maybe
31:06
I'm saying something controversial here, but data isn't sexy. And so people
31:10
don't want to talk about it. And people that are in data fields
31:15
love data, but other people don't necessarily, and they think it's a solved
31:19
problem. And I'm like, it's not a solved problem and it will never
31:22
be a solved problem. Yes. Exactly. Because the more data we create,
31:25
the more issues we're going to have. And so people just want to
31:29
throw technology at it. Oh, Tim's going to be so sad he was
31:31
not on this. He's going to listen to this later and literally be
31:34
fist pumping in the air and be like, yes, yes. I keep being
31:37
like, Tim's smiling somewhere in the world right now at multiple points
31:40
and he doesn't know why. He's just like, oh.
31:43
This warmth has come over me. Okay, so something that I've been thinking
31:49
about ever since you talked a little bit about the example,
31:51
Kathleen, is the postal service example about the chatbot answering that
31:56
most popular question. So if the ROI proves itself for that single question,
32:03
are any other subsequent use cases solving problems just gravy on top?
32:08
Because if you were to try, just because it worked for that first
32:10
one doesn't mean it's going to be appropriate for the second. Or maybe not for the third. Or perhaps it would have to pull
32:15
in another pattern which expands the scope. So is it
32:19
a freeing place to be after you've come up ROI positive on one
32:23
first use case? Because then you have a different proof point for a
32:28
second use case. Because if it doesn't work out, you're like,
32:30
nope, we're still good. Track my package. We can explore use case number
32:34
three, but we're going to go ahead and happily depart from investing further
32:37
in use case two as an example. Is that mental model
32:40
of building on that accurate? I'm curious about your thoughts. Yeah, I mean, every
32:46
use case, every example, every organization is going to be different.
32:52
And so you have to say, what really is that ROI?
32:55
Because if the ROI is to reduce call center volume, then maybe it
32:58
shouldn't just be the second most asked question. It should be the second most
33:03
asked question that the call center gets. And is AI the right solution
33:07
for it? I don't know. Depends on what it is. Because maybe if
33:10
it's... I need locations of different post offices, you can just have it
33:17
direct to a point on the website. It depends on what exactly those
33:22
questions are. But yeah, but to just really drill down. And then when
33:26
you get to a point that you're like, this is good, we always say, AI isn't set it and forget it. So you have
33:30
to make sure that it continues to perform as expected. And so think
33:33
about what that means for the resources at the end of that iteration. But
33:39
you don't always need to continue and continue and continue and try and
33:43
make it more efficient and try and make it better and try and
33:45
have it answer all these different things. Because that's where people do
33:48
get into trouble, and they start doing things that maybe have a negative
33:53
ROI where it used to have a positive ROI. Or they could have
33:57
done a different use case or a different example, a different project. You
34:02
want to have those quick wins. So we always say, think about what
34:05
is the smallest thing that you can do that's going to show a
34:10
positive win. Because obviously you're not going to get investment for further
34:14
projects if you're showing negative wins, negative returns. So what could
34:20
continue to be those positive wins? And then at some point you're like,
34:24
okay, we've done a lot with this, let's move on to our next
34:28
project. Or how can we add a different pattern into this?
34:31
Or how can we do something different? But you do want to always
34:34
be thinking about that and saying, and that's why we always say,
34:38
come back to this methodology where it's six steps and it is iterative.
34:42
So if you're not ready, you come back. We start with business understanding: what problem
34:45
are we trying to solve. Then we move to data understanding.
34:47
We need to understand our data. We need to understand if it's,
34:51
do we have access to this data? Is it internal, is it external,
34:54
what type of data is it? And then from there we go to
34:57
data cleaning. So because again, we know that data is not going to
35:01
be nice and clean, and we need to do things like dedupe it
35:05
or normalize the data or whatever it is in that next phase.
35:11
Then from there then we can actually build the model, then we test
35:15
the model and then we put the model out into the real world,
35:17
which we call operationalization. So that one question would be
35:21
one iteration of the chatbot. So then we come back and we say,
35:24
okay, now let's figure out the next problem that we're trying to solve
35:28
and do we have the data for that?
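Here is a minimal sketch of that six-phase loop in Python; the phase names follow the description above, while the loop structure and the function itself are illustrative assumptions, not PMI code.

```python
# The six CPMAI phases as described above, run as an iterative loop.
# One pass = one tightly scoped problem, e.g. USPS's "track my package".
CPMAI_PHASES = [
    "business understanding",  # what problem are we trying to solve?
    "data understanding",      # do we have access? internal or external? what type?
    "data preparation",        # dedupe, normalize -- data is never clean
    "model development",       # actually build the model
    "model evaluation",        # test the model
    "operationalization",      # put it out into the real world
]

def run_iteration(problem: str) -> None:
    """One pass through all six phases for one scoped problem."""
    for phase in CPMAI_PHASES:
        print(f"[{problem}] {phase}")
    # AI isn't set-it-and-forget-it: monitor performance, then come back to
    # phase one with the next problem (e.g. the second most asked question).

run_iteration("track my package")
```

I really like the fact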
35:32
that you asked that, Val, because it's giving me a light bulb moment
35:35
of I have a coworker, Nick, who always says we're not here looking
35:39
for local maxima. And I feel like that's exactly what you're saying,
35:43
Kathleen, is you prove ROI on that use case. But then you have
35:48
to pick your head up and say, now what is our highest priority
35:51
problem? Was that ROI enough to maybe make the problem of that huge
35:55
volume coming in asking to track packages not our top business problem
35:59
where we need to take these people's resources, time, brain power for AI
36:03
solutions and keep pointing them in the same direction? Maybe this is where
36:07
we pivot to get the most ROI. Instead of saying, we started AI
36:10
here on the chatbot, we must continue on the chatbot. I'm telling you,
36:15
there's a company that has this exact work stream where there's the chatbot
36:20
AI roadmap. And they are going to run that down versus the reorientation,
36:26
like exactly what you're talking about, Julie and Kathleen, about the next
36:29
biggest problem which might have nothing to do with the chatbot or track
36:33
my package. Yeah, I like that a lot too. Oh, I love that. Not
36:38
looking for local maxima or something. Like, I just, I love the phrase.
36:42
Oh, see, I just always talk about diminishing returns. I feel like that's
36:46
equivalent. Yeah. But sorry, people, we are running out of time and I
36:52
have so many questions for Kathleen. I am dying to talk about skill
36:56
set. In your experience, people that are project managing with AI,
37:01
is it a different skill set? Is this the same skill set as
37:04
anyone doing project management or even the team that are involved, what
37:09
are the things that make the team possibly more successful? That's a great
37:12
question. So when we talk about AI and project management, we talk about
37:16
it from two angles. A lot of people are talking about what are
37:19
the tools I can use to help me do my job better?
37:22
And that's where a lot of like 95% of conversations are.
37:26
And there's so many tools. And people always ask me, well,
37:28
what's the best tool? And I go, I don't know. What are you
37:31
trying to do? There's so many different tools. I can't say; there's no
37:35
one tool that's best. But then how do we run and manage AI
37:39
projects? And that's where CPMAI comes into play. So what we found is
37:43
that when we're looking at running and managing AI projects, we get those
37:48
traditional project professionals. They're a project manager, maybe a product
37:52
or program manager, but then we also get project-adjacent people. So they're a data
37:57
scientist or they're a data engineer and they've been tasked with running
38:02
this project. So the skill sets really are unique and varied when it
38:06
comes to running and managing AI projects, not typically
38:10
always that traditional project manager skill set. And they're usually a
38:14
little bit farther along in their career as well. So we found that this
38:18
complements PMP very nicely, for example. A lot of people that
38:23
get CPMAI certified are also project management professionals with PMP certification.
38:27
They're a little bit farther along in their career. Doesn't mean that you
38:30
can't run and manage AI projects early on in your career,
38:33
but it does... We do find that they tend to be a little
38:37
bit more mid to senior in their career. That's interesting. I wonder if
38:41
that's also because so many of the things that I've heard you talk
38:44
about, both on your own podcast and today,
38:48
it actually requires really deep understanding of the business and the strategy
38:54
and asking the right questions. And I feel like typically those are the
38:57
skill sets that people get better at with time. I mean,
39:01
I have some amazing junior people in my team that are naturally just
39:03
very good at that. But I do find it tends to be,
39:06
you need to have a bit of experience under your belt. So I wonder if that's part of the allure or if it's just
39:10
that people are more willing to take some risks.
39:14
I think it's because they know the industry, they know the real problems,
39:18
the real pain points. And then they're now solving for that.
39:23
And so AI is going to become a part of more and more
39:29
projects as well. So we may see a shift over time where everybody
39:32
needs to be an AI project manager because they're going to be involved
39:36
in more projects. But what we've seen so far is that it tends
39:40
to be a little bit later in their career, not super
39:45
early in their career. Because you need to have some of that industry
39:48
knowledge. I mean, even thinking about ROI. What's the return that you're
39:51
looking for at that organization? If you're new to the industry,
39:54
you may not know some of those real pain points.
39:58
And I know at PMI you've talked previously about power skills.
40:02
Can you tell us a bit more about that? Yeah, sure.
40:06
So at PMI we call soft skills power skills. And I think that
40:10
this conversation is incredibly important. So even on AI Today podcast we've
40:13
talked about this and I've written articles in Forbes about this.
40:16
When we think about how we've taught in previous years, and what we
40:21
focus on with school and academics in K-12, it's been a lot
40:25
of STEM, so science and technology and engineering and math. Some of those
40:30
types of skills. And they're great skills to have. But we also need
40:35
to be thinking about creative thinking and critical thinking and collaboration
40:41
and communication. And so now that generative AI has put AI into the
40:45
hands of everybody, we need to really think hard about what it is
40:51
that those outputs are, and how we use them. So I always like
40:54
to think about this as two sides. So how do I use my
40:58
power skills to be better with large language models and generative AI?
41:04
How do I become a better prompter because of that? And how do
41:06
I take the results? How do I use generative AI to help me
41:10
with my power skills? So how do I use it to help me
41:13
be a better communicator? Maybe it can write emails in different tones that
41:17
I struggle with, or maybe it can help me with translation in ways
41:23
that I couldn't before. Or how does it help me brainstorm?
41:26
How does it help me bring teams together and have those collaborative sessions?
41:31
But then at the same time, how do I take my critical thinking
41:33
skills and say, was this a correct output? Maybe I shouldn't trust it. Let
41:40
me, what is it, trust but verify? Always think about what it is
41:44
that's coming out. Because we know that they can hallucinate. We know that
41:47
means that it can give results where it's confidently wrong.
41:52
Well, okay, let me do a little bit of critical thinking here and
41:55
saying, okay, maybe drill down one level deeper. Or how can I have
41:59
better communication skills with it and do a follow up prompt or write
42:04
it a little bit differently or have it help me rewrite and tailor
42:07
even more finely the results that it's given? And so I think it's
42:12
really important to use those power skills and not take them for granted.
42:17
I also am really interested to see the shift now in learning. Sometimes
42:23
people get a very negative reaction to AI and they go,
42:28
oh, it's going to, students are going to be cheating with this or
42:31
whatever. And so they just have this do not use policy.
42:34
But of course people are going to use it. And even organizations,
42:37
if they don't really know how to manage this, they'll go,
42:39
well, you're not allowed to use it internally. Well, guess what?
42:42
They're all using it on their personal devices and it's probably way worse
42:46
because there's data leakage and there's security issues that are going
42:49
on and the organization can't control that. So we say, don't fight the
42:53
technology, but really lean into it and let's all use it in that
42:58
trustworthy, ethical, responsible way and not fight it, because it is going
43:03
to be here. So how do we now teach children these power skills
43:07
and help use the AI technology to help them be better at communication
43:13
or collaboration or critical thinking or creativity or whatever
43:18
that power skill is that you all like and want to think about.
43:23
I always think about critical thinking. I think that that's such an important
43:26
and usually underrated, under-discussed skill. We are all just
43:34
clicking our fingers in agreement. Do you think critical thinking can be...
43:39
It's a very controversial question that I have been wrestling with for my
43:42
whole career. Do you think critical thinking can be taught or do you
43:46
think some people naturally are better at critical thinking than others?
43:51
So I think anything can be taught, but I think that some things
43:54
come more naturally to people. So you may not be a great
43:59
communicator, for example. You may struggle to find words, but if you use
44:04
a large language model, it can help you become a better communicator.
44:08
Same thing with critical thinking, but it's something that is like a reflex.
44:12
And so you need to really embrace that. And I think that leaders
44:16
on teams, colleagues can really help. And that's something that everybody
44:21
needs to be thinking about and really feel safe and empowered to have
44:24
that critical thinking and say, I understand that's what you said,
44:28
but what did you mean? Or I understand that's what you said,
44:31
but let's drill down one level deeper. And that's how you really get
44:34
that critical thinking. And I've been trying hard to teach it to my
44:38
children. I have two young kids. And then I also think about how
44:41
do I apply this? And this is so incredibly important because now in
44:46
the age of AI, there's a lot of misinformation, disinformation. We say you
44:51
can no longer believe what you see, hear or read. So how do
44:54
you say, did this come from a source that I can trust
45:00
or should I be questioning this? And okay, so an example out there
45:05
is there's a stat that Elon Musk is the richest man in the
45:08
world, and he has like 44 or 48 billion dollars, and there's 8
45:13
billion people in the world. So if he gave each person a billion
45:15
dollars, he'd still have $40 billion. And I'm like, that math ain't mathing.
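For what it's worth, a two-line sanity check bears that out; the numbers below are just the ones quoted in the claim.

```python
# Quick check of the viral claim using its own numbers: $44 billion spread
# across 8 billion people is about $5.50 each, nowhere near $1 billion apiece.
wealth = 44e9   # dollars, as quoted
people = 8e9    # rough world population
print(wealth / people)        # 5.5 dollars per person
print(wealth - people * 1e9)  # about -8e18: giving everyone $1B is impossible
```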
45:19
But people are circulating it like it's the truth. And even one of
45:21
my friends sent it to me, and then I told him,
45:24
I was like, wait a second, this isn't right. And I said to
45:27
my husband, I go, what is this? And he's like, this is ridiculous.
45:30
But people aren't checking, because we're in such a go, go, go world. And
45:35
you need to understand where this is coming from. People just hear something
45:39
from the internet, believe it, even though we say, don't believe it,
45:42
and then they regurgitate it like it's an actual stat. And I'm like,
45:46
please stop. That's critical thinking, just because you hear something doesn't
45:50
mean that it's the truth. So maybe do math and say,
45:53
okay, that math isn't mathing, or figure out where it came from.
45:57
And it gets harder because AI is prevalent. And so that's why critical
46:03
thinking is really now critical. Okay, I'm going to ask one last question,
46:10
just because that's what I like to do.
46:13
I was looking at some research the other day, and I feel like
46:17
we are so in the thick of AI from the technology perspective, we're
46:22
all living and breathing it. But it does seem that there are these
46:25
huge sections of society that have such a different experience.
46:31
And a lot of it is that the wider public can be quite
46:35
apprehensive about AI and that if you're trying to market a new feature
46:39
or product or whatever, potentially you don't even want to mention that
46:42
it's AI. And I was a bit surprised by that. And I was
46:45
going through San Francisco a couple of months back, and I was blown
46:49
away because every single ad was talking about AI. And I was like,
46:53
I don't get this. Why do all the ads reference AI?
46:57
And of course, I started chatting to people about it, and they're like,
46:59
because it's San Francisco. It's because people want to use it to attract
47:02
talent. And, like, look how shiny we are. We're doing the cool thing,
47:05
but that's not necessarily the same as what the customers want.
47:09
Is that a tension that you've noticed? Like, I don't know,
47:12
companies have to package it up and maybe not fully show the, like,
47:17
'what' and all of that, that this is AI solving your problem.
47:21
Yeah, so I like how you brought that up, because
47:26
San Francisco is Silicon Valley. So they're very tech forward and tech leaning.
47:31
And a lot of this is coming from there. So of course they're
47:34
going to be pushing that. And that landscape does look different than
47:37
other parts of the country or the globe.
47:40
You also have to think about what industry you're in and some industries
47:44
are embracing AI a lot more than others. That's like a heavy technology area.
47:52
And probably most of those ads were heavy in tech. And you think
47:55
about all of the tech companies that are from there. But then there's
47:58
other industries that are not as forward leaning with AI even if they're
48:04
using it. And that's for a number of different reasons. Like healthcare,
48:07
there's a lot of applications that could be used but aren't always used
48:13
or are used as what we call augmented intelligence, where it's not replacing
48:17
the human but helping them do their job better for a variety of
48:20
different reasons. You can't have AI systems diagnose patients. So they
48:26
can provide a diagnosis, but then the doctor needs to actually provide that,
48:30
at least in the States; only in very limited use cases can you actually
48:34
have an AI system diagnose a patient. Construction also is an industry
48:40
that is not a heavy adopter of AI. Yes, of course there's applications
48:45
for it, especially when you think about work, job sites, the recognition
48:50
pattern is being used to make sure that people are either not on
48:53
the site when they're not supposed to be. So keeping that watchful eye
48:57
over it, or for safety reasons, making sure that they have on protective
49:01
gear, hard hats, and it can monitor it in real time and then
49:03
you can fix it in real time so that you can prevent injury.
49:07
And so I think that it depends on the industry. And also there's
49:10
a lot of fears and concerns when it comes to AI that we
49:14
don't feel with other technologies. I don't think people fear mobile technology,
49:19
for example, as much as AI. And this comes from a variety of
49:22
different reasons. Science fiction, Hollywood. We conjure up all these different
49:27
ideas of what good and bad AI can do. We think about HAL
49:32
or the Terminator or Rosie from the Jetsons, and we don't have this
49:37
when it comes to other technologies. So people have real fears which are
49:42
emotional and concerns which are more rational, and we need to be addressing
49:46
that. So messaging plays a part in all of that. And I think
49:50
that it depends on the industry, it depends on the user and the use case.
49:54
And so we shouldn't hide necessarily that we're using AI, but we don't
49:58
always need to be so forward leaning if the industry isn't quite ready
50:02
to embrace it. Thank you so much, Kathleen. That was such an incredible
50:06
place to end. Yeah, I think we're all blown away. We're going to have
50:08
to do a part two at some point if we can drag you back. But we do like to end the show with something called Last
50:14
Calls where we go around and share something interesting we've read or come
50:17
across or an event that's coming up. You're a guest. Is there something
50:20
you'd like to share with the audience today? Sure. I mean,
50:23
obviously AI Today podcast. I think it's wonderful. It's been going on now
50:28
eight seasons and we're in the middle of a use case series.
50:30
So if people want to see how AI is being applied in a
50:33
number of different industries, then definitely check that out. And also
50:37
one event, I've been an Interactive Awards judge for South by Southwest
50:41
for a whole decade now. I can't believe it.
50:43
I know. And I'm going back, so really excited for that.
50:47
And PMI is going to have a presence there, and so I'll be
50:50
on a panel discussion. So I think that that's pretty exciting. Yeah. I
50:55
can talk AI all day, every day. So I'll be
50:58
a judge at the Interactive Awards live. So it's March 8th when the
51:04
judging happens and then my panel will be a day or two later.
51:07
Nice. Thank you. Very cool. Julie, what about you? So I'm pretty proud,
51:13
this week I finally tried out making a gem in Gemini and I
51:18
don't know if any of you guys have tried it, but I was really proud. It was just one of those things on my to do
51:21
list. I'm like, I want to play with it, I want to do it. I kept putting it off. I didn't find time.
51:26
Was at work and found a great use case for it.
51:29
And so I finally took the time to do my pre prompting.
51:33
And actually part of what I wanted to call out here was that
51:35
I finally understood what it was doing. I would hear everyone at work
51:39
say, I set up a gem, I'm recreating myself. It's the coolest thing
51:42
ever. It can do so many things for me. And I'm like,
51:45
whoa, okay, I'm intimidated, but it sounds awesome. So when I sat down
51:49
to do it with some of my colleagues, they were explaining to me
51:52
that what it's doing is you're pre prompting Gemini, and so you get
51:57
to save all this information. So, for example, I said the role I'm
52:01
playing is a consultant in analytics and experimentation. This is my title.
52:06
Here's my LinkedIn, here's what I focus on. Please use with every answer
52:11
the context of these couple documents that I gave it. And in those
52:15
documents I was able to give it a lot of
52:18
slideware and other documents I've created in the past of saying,
52:23
like, this is the topic I want you to reference when I'm asking
52:27
you these types of questions. And so once I really understood that it
52:30
wasn't magic, you weren't giving it just a subset of data,
52:33
you were pre prompting the model, it was like it finally really clicked.
52:37
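To make that pre prompting idea concrete, here is a minimal sketch of the same pattern in Python, assuming Google's google-generativeai SDK; the role text, file name, and model name are illustrative assumptions, not the exact gem setup described here.

```python
# A minimal sketch of "pre prompting" a Gemini model: a saved role plus
# reference documents applied to every question, similar to a gem.
# Assumes the google-generativeai SDK; the role text, file path, and
# model name are illustrative assumptions, not the setup described above.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# The "saved" part of a gem is effectively a reusable system instruction.
role = (
    "You are a consultant in analytics and experimentation. "
    "Ground every answer in the attached reference documents."
)

# Reference material the model should consult on every request.
slideware = genai.upload_file("past_slideware.pdf")  # hypothetical file

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=role,
)

# Each new question is answered with the saved role and context applied.
response = model.generate_content(
    [slideware, "Draft a charter outline for a thought leadership group."]
)
print(response.text)
```

As the conversation notes, nothing magic is happening: the model simply sees the same role and reference context prepended to every request.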
And I tried it out today. I said, I'm trying to spin up
52:41
this specific thought leadership group. I gave it a few sentences of things
52:45
I had brainstormed. I gave it to my gem, who I named Juniper.
52:49
And I'm embarrassed to say, I literally went to ChatGPT and was like,
52:53
what are fun names for gems? Because I was not feeling creative that
52:56
day. Stop it. No, you didn't. That's... Yeah. Anyway, so Juniper,
53:02
I asked Juniper for this. It gave me like a two page outline
53:06
for the whole charter of the group. And
53:09
it was like a little broad, like I'll take it and change it.
53:11
But yeah, I was very impressed by this gem. So something fun to go
53:16
try. It was less intimidating than I thought. Very nice. I like that.
53:20
Over to you, Val. So mine is a twofer, but they're actually related,
53:24
and it's actually more related to this conversation than I was originally
53:28
even anticipating, which I love. So the first of the two is a Medium article
54:33
called Thinking in Maximums: Escaping the Tyranny of Incrementalism in Product
54:38
Building. And it's all about the local versus global maximum.
53:41
It goes through all these use cases of like, why MVP thinking is
53:45
actually problematic in some cases. And all these stories of companies that
53:50
actually swung big and why that's so much better than taking it down
53:54
to the smallest bolts of the product and getting feedback and
53:59
not really being tied to the full vision. Which, I'll just call out,
54:03
I'm not sure I agree with all this, I just find it interesting.
54:05
And then I was also listening to a podcast from the Product School.
54:10
They were interviewing the CPO of Instacart. And one of the call out
54:14
quotes from that was, you won't hear me say or use the word
54:17
MVP because I find it to be very reductive. And I think that
54:21
product is so much bigger than that. So anyways, I'm just,
54:24
I've been doing some research around, is this a theme in the product
54:27
world and product space and how they're thinking about this? Because obviously
54:31
as someone who has an experimentation background, I'm very much a fan of
54:35
de risking and your "think big, start small," Kathleen, which I love.
54:39
So. But two interesting reads from very different POVs than where I stand
54:44
and thinking about how to break down and think about the work and
54:47
de risking choices as you're moving along the process. So two good ones
54:52
there. Those are good. I can't wait to read those.
54:55
Yeah. And how about you, Moe? Mine have nothing to do with the
54:58
show. Mine are just fun. Well, one is Canva Create is coming
55:03
up next month, April 10th in Hollywood Park, Los Angeles, which I am
55:09
super excited about. It's just, yeah, really fun atmosphere and we always
55:13
have some incredible speakers. So super pumped about that one. The fun one,
55:18
so I had a session yesterday with my mentee and she started talking
55:22
about Gretchen Rubin, and she's like the four tendencies and blah,
55:26
blah, blah. And I was like, this sounds really familiar. And then I
55:28
realized I'd listened to a podcast on it, but the podcast was applying
55:32
the four tendencies to children, and how you raise your children.
55:37
And then I'd never actually gone back and read the total work of
55:41
Gretchen. And so we had a really interesting conversation about it.
55:44
It basically talks about, whether you're an upholder, an obliger, a rebel,
55:48
or a questioner. And it's basically to do with where your motivation comes
55:52
from, if it's an internal motivation, external, both, etcetera. And the
55:56
thing that blew me away is that I had listened to it and
55:59
been like, this is the one that I am. And then as I
56:03
was talking about it more and more, I was like, oh,
56:05
I'm a different one. And then I did the quiz and I was like, I'm actually a completely different one to what I thought.
56:11
So that was like a really big eye opener because, yeah,
56:15
I've been thinking a lot about my own motivations and how I can
56:18
get the best out of myself. And life and balance and all of
56:21
these things. So it was actually also just like a really nice way
56:23
to break up my day. So, my poor team
56:26
don't know it yet, but I'm going to ask them all to do the quiz because I'm so interested to see what everyone is.
56:32
So, yeah, those are my two last calls. Just to wrap up,
56:34
I want to say a massive thank you, Kathleen. This was just phenomenal.
56:38
We have not even touched the sides of all of the possible directions
56:42
that we could have discussed with you. But a very big thank you
56:45
for coming on the show today. Yeah, thank you for having me.
56:48
This was such a wonderful discussion. And we can't end without saying a
56:51
big thanks also to our producer, Josh Crowhurst and all of our wonderful
56:55
listeners out there. If you have a moment, we'd love if you could
56:59
drop us a review on your favorite podcast platform.
57:03
And I know I speak for Val, Julie and myself, no matter how
57:08
many problems you're solving with AI this year, keep analyzing.
57:15
Thanks for listening. Let's keep the conversation going with your comments,
57:18
suggestions and questions on Twitter @analyticshour, on the web at analyticshour.io,
57:25
our LinkedIn group and the measured chat Slack group. Music for the podcast
57:30
by Josh Crowhurst. So smart guys wanted to fit in. So they made
57:35
up a term called analytics. Analytics don't work.
57:39
Do the analytics say go for it no matter who's going for it?
57:42
So if you and I run the field, the analytics say go for
57:45
it, it's the stupidest, laziest, lamest thing I've ever heard for reasoning
57:51
in competition. Quick, before you drop, Kathleen, when you were talking about
58:04
the communication skills helping with the way you communicate, informing
58:07
the prompt engineering, and even what you were talking about, Julie?
58:07
ChatGPT did me dirty. So you know how it shows all... It's essentially
58:11
showing your search history in that left rail unless you hide it.
58:17
I went back to end of 2023, and half of what...
58:21
My responses were, give me three more. Give me three more.
58:25
And that was, I was giving it no more direction or information.
58:31
I was like, give me five more. Make it funny. Give me five
58:34
more. It was like all I said to it. To be fair,
58:37
sometimes I say, do better. I was like, no additional information.
58:44
Just try harder. Rock Flag and AI is a data problem.