Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
Today on the AI Daily Brief, the
0:02
most in-demand agent use cases right
0:04
now. The AI Daily Brief is a
0:06
daily podcast and video about the most important news
0:08
and discussions in AI. To join the
0:10
conversation, follow the Discord link in our show notes. Hello,
0:18
friends. Spring Break Week continues with
0:20
another interview episode, and this time I
0:22
am once again joined by Nufar Gaspar to
0:24
discuss the agent use cases that we're
0:26
seeing come up most often. Nufar
0:29
is once again a brilliant AI analyst, product
0:31
and strategy leader who has consulted with
0:33
some of the biggest companies in the world
0:35
on AI strategy and who works with
0:37
us at Superintelligent on our agent readiness
0:39
audits and our agent marketplace. Today
0:41
we're looking at some of the agent use
0:43
cases that are most in demand coming out
0:45
of both our audits and our agent marketplace
0:47
as a way to potentially help you understand
0:49
what agent use cases are actually ready for prime
0:51
time and which still remain a little bit
0:53
farther away. All
0:56
right, Nufar, welcome back to the show. This
0:58
is not exactly a part two from
1:00
what we did before, but I think a
1:02
lot of people will connect the dots.
1:04
Before, we were talking about the mistakes
1:06
we're commonly seeing among organizations
1:08
as relates to AI broadly, but agents specifically.
1:11
Today we're talking about what agent use
1:13
cases are actually ready for prime time,
1:15
what agent use cases are being implemented
1:17
right now and people are finding success
1:19
with. And again, this comes back and
1:21
hearkens to sort of a theme from
1:23
that previous show as well, which is
1:25
how can we help organizations spend their time
1:27
more effectively in this in
1:29
this agent and AI transformation. So
1:32
let's dive in. We're going to talk
1:34
about some use cases specifically, obviously. But
1:36
I think that where we wanted to
1:38
start is talking about based on our
1:40
set of experiences, which includes, you know,
1:42
a huge number of conversations with different
1:44
types of companies, both in the context
1:46
of the Superintelligent work with
1:48
these agent readiness audits, but also in
1:50
terms of your independent practice and consulting.
1:52
What are we seeing on average, at
1:54
the risk of maybe being overly reductive?
1:56
Where are orgs with agents on average,
1:58
at least in the enfranchised organizations that we're
2:00
tending to deal with? Yeah. So
2:03
let's assume that there is some
2:05
bias because if they're very, very mature,
2:07
perhaps they will not come to
2:09
us. But everything that we've seen thus
2:11
far are companies that we categorize
2:13
as being in either the agent initiation or agent exploration
2:15
phase, meaning that they're either just
2:17
starting to contemplate agents or maybe they've
2:20
started working on a handful of
2:22
agents or some agents ideas. But in
2:24
general, they're very early stages. And
2:26
like we talked about the seven common
2:28
mistakes, in many cases, some are
2:30
not ready; they have
2:32
a lot of work to do in
2:34
preparation for agents even before they
2:36
can introduce the first one. So they're
2:39
very, very early on. We are
2:41
seeing some organizations, and we actually
2:43
encourage them in some cases, that almost
2:45
have no AI adoption but are
2:47
looking into agents as the way for
2:49
them to bridge the gap of
2:51
how far behind they are. In
2:53
some cases, these are smaller organizations
2:55
where it makes more sense for them
2:58
to hire an agent versus hire
3:00
a human employee. And in other cases,
3:02
they will still have to do
3:04
the groundwork of getting agent-ready before
3:06
they will be able to
3:08
bridge the gap with agents. Yeah,
3:11
You want to talk also about the more advanced ones? Absolutely.
3:13
Yeah, so I think maybe
3:15
a better way to frame this even than just
3:17
where orgs are on average is sort of what's the
3:19
band of organizations that we're seeing, the
3:21
common band from beginner to sort
3:24
of a little bit more advanced.
3:26
Yeah. So from what we've seen,
3:28
even the most advanced organizations, ones
3:30
that already have agents in production,
3:32
we're only talking about, in many
3:34
cases, a handful of meaningful agents
3:36
in production. And I'm not talking
3:39
about the personal productivity that individuals
3:41
are creating for themselves. These are
3:43
not, in my book, meaningful agents
3:45
in production. And I'm not also
3:47
going to discuss the question of
3:49
whether custom GPTs and other
3:51
such assistants count as agents. Let's leave them
3:53
out of the discussion. They do
3:55
have in production, in many cases, off
3:57
-the-shelf kind of agents that they
4:00
built with vendors like Agentforce, Microsoft,
4:02
and others. The other cases
4:04
that we're seeing, and we'll talk more
4:06
about the actual use cases, but of
4:08
course customer support is by far the
4:10
most predominant category of where agents are
4:12
actually mature in production, and
4:14
a handful in some cases
4:16
of other supporting agents in
4:18
some very well-focused functions within
4:20
the company. So I
4:23
think that that bridges this into
4:25
maybe where organizations are not. So the
4:27
second thing that we wanted to
4:29
explore is, again, a setup to what
4:31
agents are ready for prime time.
4:33
There's some pretty distinct patterns in what
4:35
we're not seeing. I was on
4:37
a conversation with a group of chief
4:39
AI officers a couple of weeks
4:41
ago at this point, a
4:43
number from the finance industry, some
4:45
from pharmaceuticals, a pretty big range of
4:47
different companies. And there was one
4:49
thing that was really clear. And
4:51
these are definitely more advanced, or
4:53
at least these are the most advanced
4:55
portions of their organizations. Maybe the organization
4:58
as a whole isn't as advanced, but
5:00
these are people who are highly engaged,
5:02
highly enfranchised, really thinking about these things
5:04
all the time. They're the internal champions.
5:06
And it's very clear that where they
5:08
want to get eventually is agents involved
5:10
in core business
5:13
functioning, right? If they are
5:15
in insurance, they want
5:17
agents to actually be making
5:19
decisions beyond just sort
5:21
of providing algorithmic advice like
5:23
predictive analytics have done.
5:26
They want to reorganize their
5:28
whole companies around agentic
5:30
capacities. However, to
5:32
a person, none of them
5:34
really feel like agents
5:37
have the complexity, sophistication, and duration
5:39
capability to be
5:41
used for those specific purpose-built use
5:43
cases that are the very, very
5:46
core to their company. And so instead,
5:48
it seems like the place that
5:50
they're going is focusing on not things
5:52
that are unimportant, but just other
5:54
parts of the functioning of the company
5:56
that are, call it lower risk,
5:59
right? That still allow them to get
6:01
used to integrating agents into workflows, but
6:03
are on things like customer service,
6:05
marketing, sales. Is that something that you're
6:07
seeing as well? Yes,
6:09
but I'm not sure about the observation that the
6:11
technology is not ready for what they have
6:13
in mind. I think in many cases, the organization
6:15
is not ready, whether it's
6:18
culture, technology, skills, or use
6:20
cases. In some cases,
6:22
they don't want to tackle
6:24
the most controversial thing that
6:26
they could do and get
6:28
their employees basically to create
6:30
an uproar because they're seeing
6:32
the future. So that might
6:34
be another thing. There is also a
6:37
fear of taking your core business and
6:39
offloading it to AI because of
6:41
the potential pitfalls beyond the
6:43
employees. And then lastly,
6:45
technology perhaps is not ready in some cases,
6:47
but I'm not sure whether in all cases
6:49
this is indeed the case that the technology
6:51
is the biggest hurdle. Sure. So
6:53
basically it sounds like you're seeing
6:55
the same sort of inclination towards,
6:57
you know, orthogonal use cases rather
6:59
than, you know, core business function
7:01
use cases. You're just not
7:04
sure whether that's more technology or
7:06
the organization itself or some combination
7:08
thereof. Yeah. And in
7:10
many cases, we're seeing that
7:12
people are doing what other people
7:14
are doing. So there is like
7:16
a momentum here of automating the
7:18
support function. First, I think from
7:20
a business perspective, it makes sense
7:23
to dip your toes in the
7:25
water where it's safer from
7:27
various perspectives and then go from there.
7:29
Just yesterday, we had a conversation
7:31
with a company that is very
7:33
bold in their agentic approach, but
7:35
they're saying, let's get the efficiency
7:37
off the table first and put in
7:39
all of these agents that can free
7:42
some of the bandwidth of our employees.
7:44
And then let's tackle the core business
7:46
with agents, not because we think that
7:48
we can, but because we want to
7:50
have our employees have more bandwidth
7:52
to think about those core agents before
7:54
we dive in headfirst with those. Yep.
7:57
No, it totally makes sense. So, okay,
7:59
then let's move to the meat of
8:01
this conversation, which is: what are people
8:03
actually doing right now? What are we
8:05
seeing most commonly in terms of the
8:07
agentic use cases that are being deployed,
8:09
that are ready for production, that are
8:11
actually yielding results for companies? Yeah,
8:13
so I can name a few and
8:15
then you can add whatever I'm missing. But one
8:18
thing that is very straightforward and easy
8:20
to use is basically to use agents
8:22
that others have built. Among those
8:24
are agents for coding, which are probably
8:26
among the most mature. I think
8:28
literally every coding platform now offers an
8:30
agent. Some of them are better than
8:32
others. Some of them are agent-native,
8:35
versus others that are just introducing agents
8:37
almost as an afterthought. But those are
8:39
getting some good momentum and some positive
8:41
feedback. Then there's my
8:43
personal favorite, deep research agents, which
8:45
are doing an amazing job, and
8:47
we're also seeing some companies basically
8:49
creating their own version of
8:51
deep research so that it can
8:53
work internally or on their
8:55
own terms, and these are some
8:57
very good use cases. The
8:59
other probably simplest thing that people
9:01
are doing is augmenting the
9:04
classic Zapier or Make automations
9:06
with more agentic capabilities, whether
9:08
it's for planning, some open-ended
9:10
tasks that currently agents can do
9:12
and beforehand they couldn't, or
9:14
augmenting them with more like
9:16
NLP-based interactions of text
9:18
and speech, but just adding
9:20
them to the existing flows
9:22
concretely or metaphorically by creating
9:24
similar automations using other tools. Let's
9:27
actually pause I want I want to break
9:29
these apart a little bit because there's there's
9:31
a lot to dig into here So let's
9:33
talk about the augmented automation a little bit
9:36
because they're frankly the least interesting of these
9:38
things to me. They're sort of
9:40
a very obvious starting point,
9:42
but this is the
9:44
area where people love to debate:
9:46
are these things really agents or not? Like, what
9:48
should we call automations? What should we call agents?
9:50
I'm well on the record with this one that
9:52
I think people should give up the ghost:
9:54
agents is close enough, and actually people's understanding
9:56
of that term is directionally correct and they should
9:58
just be fine with it. But it feels
10:01
like this is an area
10:03
where there are certain types of
10:05
tasks that are just, they're
10:07
so begging to be automated more.
10:09
And it's really just figuring
10:11
out these sort of very slight
10:13
customization improvements for existing attempts
10:15
at automation. It feels
10:17
likely that these things are,
10:19
you know, going to be completely
10:21
boring, rote, and normalized, you
10:23
know, inside of a very short
10:25
period of time. Galileo called
10:28
these the digital assembly line. In
10:30
KPMG's TACO framework, they called these the
10:32
taskers, right? Think things that are
10:34
very, very specific. I mean,
10:36
how much are organizations
10:38
getting fired up about these things
10:40
versus they're already in the kind of
10:42
table stakes column? Yeah,
10:44
in most cases I think those
10:46
will be like stuff that the
10:49
employees will individually create for themselves
10:51
and thereby they will not move
10:53
the needle. In some cases, though,
10:55
you are seeing some business processes
10:57
that even using this method can
10:59
be automated significantly more than what
11:01
they've been doing thus far and
11:03
in those cases you might be
11:05
able to see even higher-value
11:07
use cases that are implemented using
11:09
not very complicated technology. Interesting.
11:12
So basically, there's a risk of
11:14
undervaluing the simplicity just because it
11:16
is simple in terms of its
11:19
potential business impact. Exactly. Yeah.
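[Editor's note: as a rough illustration of the augmented-automation pattern discussed above, a classic trigger-and-action flow with one agentic planning step dropped in, here is a minimal sketch. All names are hypothetical, and the plan_step stub stands in for the LLM call a real build would make.]

```python
# Sketch: a classic "trigger -> fixed actions" automation, augmented with one
# agentic planning step. In a real build, plan_step() would be an LLM call;
# here it is a keyword heuristic so the sketch runs standalone.

def plan_step(message: str) -> str:
    """Stand-in for a model deciding what the automation should do next."""
    text = message.lower()
    if "refund" in text or "cancel" in text:
        return "escalate_to_human"      # open-ended / risky -> hand off
    if "invoice" in text:
        return "send_invoice_copy"      # well-defined -> automate fully
    return "send_faq_answer"

def run_automation(message: str) -> str:
    # The automation itself stays fixed; only the routing decision is
    # delegated to the (stubbed) agentic planner.
    action = plan_step(message)
    handlers = {
        "escalate_to_human": lambda m: f"Ticket created for: {m[:40]}",
        "send_invoice_copy": lambda m: "Invoice copy emailed",
        "send_faq_answer":   lambda m: "FAQ answer emailed",
    }
    return handlers[action](message)

if __name__ == "__main__":
    print(run_automation("Please send me my invoice for March"))
```

The point of the pattern is exactly what is described in the conversation: the Zapier- or Make-style flow stays simple, and only the ambiguous, open-ended decision is handed to the model.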
11:22
Let's talk about deep research and coding for
11:24
a second. Because for my money, I
11:27
think that these might
11:29
be the two agentic augmentations,
11:31
or rather, agent categories,
11:34
where, to me, it is
11:37
very hard now to
11:39
justify doing the things you used
11:41
to do without them today.
11:44
I think that given the capabilities of
11:46
these research tools, it's very
11:48
hard for me to see how
11:50
people go without them. I don't do a
11:52
single thing that involves any sort
11:54
of research without using these tools at
11:56
this point, right? And in general,
11:59
my workflow often involves cross-checking two
12:01
or three or four of these to see
12:03
how they come up with things, right? You
12:06
know, three different versions of
12:08
deep research running at any given
12:10
time. Now, obviously... We're just scratching
12:12
the surface of deep research possibilities because
12:14
the versions that we're using are
12:16
sort of very, you know, individually designed
12:18
agents that don't have access to
12:20
proprietary knowledge bases, and
12:22
obviously what enterprises are thinking about
12:24
is how to plug those into
12:26
other data sources. But it feels
12:28
completely like, not in six months, not in
12:30
12 months, but right now: if you are
12:32
doing anything that involves sort of research or strategy
12:34
and not using those tools, I tend to
12:36
think you're behind. And I think we're
12:38
pretty much in a similar spot when it
12:40
comes to coding. Now, coding is
12:42
interesting because there
12:45
is, I think, an ironic, or
12:47
at least surprising, amount of
12:49
intransigence when it comes to adoption
12:51
of the sort of coding
12:53
agent tools and vibe coding tools
12:55
and things like that among
12:57
enterprise developers. And to some
12:59
extent there are pieces of it that
13:01
are understandable, right? Like the first
13:03
generation of these tools that are becoming
13:06
popular, right? The IDEs like Cursor
13:08
and Windsurf, the text-to-code tools
13:10
like Bolt and Lovable, they're absolutely
13:12
optimized right now for an individual's sort
13:14
of developer experience as opposed to integrating
13:16
and interacting with these massive legacy
13:18
code bases that have thousands of people
13:20
working on them, where the person
13:22
who might be typing on them one day
13:24
isn't the person using
13:26
that code the next day. But it still feels basically
13:28
criminal at this point to not
13:30
be taking advantage of these sort of
13:32
new efficiencies of these coding tools.
13:34
I mean, we at Super have
13:36
had to let developers go who
13:38
wouldn't get with the program, basically,
13:40
to change their processes around them. Is
13:43
there a point where organizations
13:46
just start to mandate that this
13:48
is now the way that you
13:50
do things, you know: if
13:52
you are not agentically augmented in
13:54
these categories, you're just
13:56
out, you're too far behind? Yeah,
13:59
you know, you're very savvy in
14:01
that. But when we talk to
14:03
organizations, they are so much behind
14:05
you. And a lot of that
14:07
comes from what you're saying that it's very
14:09
optimized for the individual user. But another
14:11
part of it, which is probably the lowest
14:13
hanging fruit: they just don't know the
14:16
breadth of possibilities, how far these tools can take
14:18
you. Because with deep research, for example, people
14:20
get stuck on the deep research part and
14:22
forget that it's just like a deep
14:24
reasoning agent that could literally get you any
14:26
complex task done much better than any
14:28
other AI tool. So they
14:31
don't understand that. Similarly with coding: when
14:33
I'm talking to engineers anywhere on
14:35
the stack, low-level, hardware, high-level,
14:37
in many cases, they just didn't
14:39
spend enough time to understand all the
14:41
use cases that they can do with
14:44
them, and that's why they're not using them.
14:46
So I agree with you that they
14:48
should, but they need to spend some
14:50
more time to get comfortable with the
14:52
existing tools, and then they can augment
14:54
them and modify them to be like
14:56
enterprise-ready for their needs. Today's
15:00
episode is brought to you by Plumb. If
15:02
you're building agentic workflows for clients or colleagues,
15:04
it's time to take another look at Plumb. Plumb
15:07
is where AI experts create, deploy, manage,
15:09
and monetize complex automations. With features
15:11
like one-click updates that reach all
15:13
your subscribers, user-level variables for personalization,
15:15
and the ability to protect your prompts
15:17
and workflow IP, it's the best
15:19
place to grow your AI automation practice.
15:21
Serve twice the clients in half the
15:23
time with Plumb. Sign up today
15:26
at useplumb.com, that's U-S-E-P-L-U-M
15:28
-B dot com forward slash N-L-W.
15:31
Today's episode is brought to you by
15:33
Vanta. Vanta is a trust
15:35
management platform that helps businesses automate security
15:37
and compliance, enabling them to demonstrate
15:39
strong security practices and scale. In
15:42
today's business landscape, businesses can't just
15:44
claim security, they have to
15:46
prove it. Achieving compliance with a
15:48
framework like SOC 2, ISO 27001,
15:50
HIPAA, GDPR, and more is how
15:53
businesses can demonstrate strong security
15:55
practices. And we see how much this
15:57
matters every time we connect enterprises with
15:59
agent services providers at Superintelligent. Many of
16:01
these compliance frameworks are simply
16:03
not negotiable for enterprises. The
16:05
problem is that navigating security and compliance
16:07
is time consuming and complicated. It
16:10
can take months of work and use up valuable time
16:12
and resources. Vanta makes it easy
16:14
and faster by automating compliance across 35 plus
16:16
frameworks. It gets you audit ready in weeks
16:18
instead of months and saves you up to
16:20
85% of associated costs. In fact,
16:22
a recent IDC white paper found
16:24
that Vanta customers achieved $535,000 per
16:26
year in benefits, and the platform pays
16:28
for itself in just three months. The
16:31
proof is in the numbers. More than 10
16:33
,000 global companies trust Vanta, including
16:35
Atlassian, Quora, and more. For a
16:37
limited time, listeners get $1,000
16:39
off at Vanta.com slash NLW. That's
16:42
V-A-N-T-A dot com
16:44
slash NLW for $1,000 off.
16:48
Hey listeners, want to supercharge your business
16:50
with AI? In
16:52
our fast-paced world, having a
16:54
solid AI plan can make
16:56
all the difference. Enabling organizations
16:58
to create new value, grow,
17:00
and stay ahead of the
17:03
competition is what it's all
17:05
about. KPMG is here to help
17:07
you create an AI strategy that really
17:09
works. Don't wait, now's the time
17:12
to get ahead. Check out real
17:14
stories from KPMG of how AI
17:16
is driving success with its clients
17:18
at KPMG.US slash AI.
17:20
Again, back to the show. I
17:23
do think that you're right that so with deep
17:25
research, the terminology, the name
17:27
actually, even though it's sort of
17:29
been universally adopted, right? It's called
17:31
deep research for both open AI
17:33
and Gemini. It's called deep search
17:35
for GROC. It potentially distracts people
17:37
a little bit from the full
17:39
set of possible use cases by
17:41
being called research, right? I also
17:43
think that by virtue of being
17:45
embedded in these other tools, as
17:48
opposed to being introduced
17:50
as a breakout kind
17:52
of standalone thing, it's perhaps being
17:54
underappreciated in terms of just how differentiated
17:56
it is. We at Superintelligent
17:58
have been among the companies, and
18:00
I've talked to lots of people who have had this
18:02
experience, who spent a bunch of time trying to
18:04
build our own kind of system to wire
18:06
a bunch of things together only to just see
18:08
if we could run it through deep research
18:10
and have it produce way better results. And you
18:12
know, we need some, like,
18:14
you know, 90-letter German word for "just do
18:16
it with deep research instead of trying to
18:19
build it yourself." But I think that
18:21
there is still, we're just scratching the surface
18:23
of those use cases, and it's gonna take some
18:25
amount of time of diffusion, of people sharing
18:27
their specific uses of deep research, for them to
18:29
be fully embraced. You know, on the
18:31
coding side, I think that you're right. I think
18:33
that it's going to change rapidly. I think
18:35
that, you know, what I'm seeing is
18:37
you're starting to see more discussion
18:39
even within the enterprise around what things
18:41
these tools can be used for
18:43
right now, right? So, uh, you don't
18:45
want to use it as your
18:47
sort of primary coding environment,
18:49
but you should deploy these things
18:51
for refactoring or whatever it is, right?
18:53
You're starting to be more
18:55
discrete about it. You're also seeing just
18:57
an absolute flood of companies race
19:00
in to try to fill the current
19:02
gaps in capability and address new challenges
19:04
that these tools are giving rise to, right?
19:06
So on the consumer side, you're seeing
19:08
seeing companies that are coming in to try
19:10
to make it easier to go from, okay,
19:12
I've got this code base that I don't
19:14
understand. How do I actually, you know, make
19:16
it live on the internet and do things?
19:18
There are companies that are coming in and
19:20
doing that. On the enterprise side, you're absolutely
19:22
seeing companies that are trying to come in
19:24
and start to maximize for those enterprise use
19:26
cases, even though they're more complex. So I
19:28
think that that's going to change pretty quickly. Anything
19:31
else on those before we move to sort
19:33
of like the big 800-pound gorilla
19:35
in terms of capabilities or things that people
19:37
are doing now with customer support? The
19:40
only thing that we need for Super
19:42
is to have deep research available as an
19:44
API. So if someone in the decision
19:46
-making process can create a very good
19:48
API-able deep research, the better. Yeah.
19:51
Dear OpenAI, I know some of you
19:53
guys are listening. Please let me know when
19:55
the API is coming. All right,
19:57
so let's talk about customer
19:59
and employee support, which I
20:01
think is probably the area that is most
20:03
discussed when it comes to agents. Yeah, so
20:05
we talked about it before that customer
20:07
support is probably the most mature agentic
20:10
use case out there But there is
20:12
an abundance of flavors of customer
20:14
support, right? We're talking all
20:16
the way from a simple,
20:18
very like FAQ kind of
20:20
an agent, all the way
20:22
to the very impressive, completely
20:24
autonomous end-to-end customer support
20:26
agents. We're also talking about
20:28
other flavors of that that
20:30
can be agents that are
20:32
helping to upsell or cross-sell
20:34
your product because they
20:36
identify opportunities. So we're starting
20:38
to see these implementations. We're
20:40
also seeing similar notions in
20:42
internal employee support, whether it's
20:44
IT support, HR support, legal
20:47
support, payroll, basically everything
20:49
that requires someone to answer
20:51
questions in various capacities. These
20:53
are perfect agent use cases. And
20:55
you can even kind of extend those
20:57
to other types of support in the
20:59
broad sense of the term, whether it's
21:01
to help with employee learning and development
21:03
or onboard new employees. In
21:05
many cases, these are the most
21:07
prime time ready agents that we have
21:09
out there. And lastly, and you
21:11
will probably claim that this is a category
21:13
of its own, there's everything related
21:16
to outbound communication: using various voice agents
21:18
to create more sales or maybe
21:20
reach out to candidates that we
21:22
want to hire and so on.
21:25
Yeah, I do think that those are, I think
21:27
there's a couple different distinct categories there. I
21:29
think that the part of the
21:31
reason though that you might want
21:33
to connect them is that all of
21:35
these have a common
21:37
thread of talking, in air
21:40
quotes, to a person, or
21:42
finding out some information about them or from
21:44
them, and then
21:46
integrating that with some pre-existing set
21:48
of information. And
21:50
AI is really good at
21:53
all the versions of that
21:55
right now. And so
21:57
you're seeing this Cambrian
21:59
explosion of basically every type
22:01
of that interaction that AI can
22:03
do. So let's talk about voice
22:05
agents for a minute. I think
22:07
that part of why voice agents
22:09
are such a hot category is
22:11
that this is a capability that
22:14
is really useful right now. I
22:16
mean, this is something that we
22:18
observed. Like part of where our
22:20
voice agent interviewing came
22:22
from was observing that
22:25
in other areas voice agents actually were
22:27
doing a pretty good job, right? So I
22:29
was looking over at the hiring space,
22:31
where companies were already deploying voice agents to
22:33
do, you know, initial screening interviews and
22:36
things like that, and they were working pretty
22:38
well. The voice capabilities are good. Advanced
22:40
voice mode had come out, so latency was
22:42
better. All of these things had kind
22:44
of come online as capabilities. And
22:46
we had the thought, well, maybe you could
22:48
basically turn that into a consultant whose job
22:50
is just to sort of ask the right
22:52
questions and grab a bunch of information. And
22:54
it turns out that a hundred other
22:57
startups or a thousand other startups are
22:59
basically going through that same process
23:01
of thinking through every other version of
23:03
asking people questions. So you have
23:05
voice agents for market research. You have
23:07
voice agents for, you name it;
23:09
right now they're coming out. And so
23:11
I think that
23:13
voice agents aren't so much a
23:15
category as sort of
23:17
a common underlying technology capability that begets
23:20
lots of categories. But I think
23:22
that as companies are thinking about where
23:24
they could be getting value from
23:26
agents right now, it is
23:26
not unreasonable to ask: what are
23:30
current functions that involve us
23:32
talking to people, you know, literally
23:34
talking to people and would any of them be
23:37
well suited for, you know, one of the copious
23:39
number of voice agents that are out there or,
23:41
you know, rolling our own version of that. And
23:43
there is so much value if
23:45
you combine the voice agent with
23:47
the deep research or deep analysis
23:49
agents, then you get even a
23:51
full-blown consultant that is autonomous,
23:54
basically. I mean, we call the
23:56
sort of underlying technology behind it, or at least I
23:58
call it this, I don't know if you'd call it
24:00
this, but I call the underlying technology that
24:02
we use for the agent readiness audit
24:04
an agent consultant engine, because that's what
24:06
it feels like, right? Its job is
24:08
to ask the right question, obviously with
24:10
us, you know, helping kind of give
24:12
it some initial ideas about what the
24:14
right questions are. And then, you know,
24:16
come up with and do analysis
24:18
on the basis of some particular goals
24:21
and particular knowledge, which, you know,
24:23
starts to look very proximate to what consultants
24:25
do. I think it's worth mentioning
24:27
briefly too before we sort of
24:29
broaden out again the sales agent
24:31
use case. This is one
24:33
to me that feels very
24:35
much again like... I don't know
24:37
that I've ever run across, or at
24:39
least in the last six months, run
24:42
across any sort of sales organization that
24:44
couldn't take advantage of the
24:46
sales-type, SDR-type agents that
24:48
are available right now. Now, that's not
24:50
to say that they are, you can
24:52
just grab one off the shelf and
24:54
it's instantly good to go. There is
24:57
more work than I think people might
24:59
imagine or might want when it comes
25:01
to getting their SDR agents up
25:03
and running. However, sales
25:06
is an area
25:08
where there is no risk,
25:10
I believe, whatsoever, because
25:12
there are always more leads. You always
25:14
want more potentials. If a sales agent,
25:16
like a human sales agent or
25:19
sales representative had access to an agent
25:21
that could get them 10,000 times
25:23
the number of leads, they would
25:25
be nothing but thrilled because ultimately, more
25:27
and more is the goal. And
25:29
so I think that one, from
25:31
an internal change management
25:33
perspective, sales is a really good area where
25:35
it seems highly unlikely to me that
25:38
we're going to see big cuts in the
25:40
sales organization because of agents. We're going
25:42
to be squarely not in efficiency AI, but
25:44
opportunity AI, where it's just how much
25:46
more can we do? How much faster can
25:48
we grow? How much bigger can we
25:50
get? And I think that that's going to
25:52
be a very useful bridge as
25:54
employees try to figure out what management's goals
25:56
are as it relates to agents. But
25:58
two, they're also kind of an area
26:00
where we're starting to get a little bit of
26:02
a preview of the future. You know, Lindy's
26:05
swarms came out recently. And
26:07
swarms are basically agents that beget other
26:09
agents, at least in Lindy's case, where
26:11
they start to do parallel processing, right?
26:13
So instead of it being an agent
26:15
that's doing, you know, data enrichment
26:17
around a particular lead at a time,
26:19
it's, you know, an agent that's fragmented itself
26:21
into a hundred different agents that's doing
26:23
data enrichment across a whole set of lookalikes
26:26
all at the same time. And it's
26:28
all just efficiency. It's how much more can
26:30
it get done in a given
26:32
period of time? And again, organizations are
26:34
just starting to put these systems online. I
26:36
think that there's still a lot of work, a
26:38
lot of customization that's necessary. But
26:40
I do think that to the extent that, again,
26:42
companies are looking for a place where they
26:44
can dive in right now, get
26:46
their hands dirty, and probably
26:49
get some pretty clear ROI
26:51
from agents. Sales and SDR
26:53
type agents are a pretty good place to look. Yeah,
26:56
I agree, and also probably
26:58
personalized outreach marketing
27:00
follows the same methodology. Yep.
27:02
Yeah, I think it's sort of
27:04
the same bucket. I will say
27:06
that I am less convinced
27:08
around marketing content in general. I
27:10
still think that there is a
27:12
fairly meaningful gap in quality between sort
27:14
of like copy that's going to come from
27:16
agents right now and copy that comes
27:19
from people. Not because agents are bad at
27:21
writing or anything, but this is just
27:23
it's an area that involves so much taste
27:25
and so much agency and so much,
27:27
you know, knowledge and experience that you don't
27:29
even realize that you have, but you
27:31
happen to notice that, you know, people responded
27:33
to this one word in a tweet
27:35
one time in a way that made you
27:37
never want to use that word again.
27:39
You know, whatever. It's one place where there's
27:41
actually still a big meaningful gap. I think
27:44
that that'll change over time. I again think
27:46
that swarms are going to be part of
27:48
the answer where we run lots and lots
27:50
of scenario planning, maybe testing. Yeah, exactly. War
27:52
game type campaigns with marketing. But let's
27:54
zoom back out for just
27:56
a minute as we close out here and
27:58
talk about, given what
28:01
organizations aren't doing, what
28:03
the challenges are, and then what they
28:05
are doing, what should orgs do? What
28:07
should they be thinking about right now?
28:09
What constitutes a reasonable, a realistic, a
28:12
successful agentic approach in this particular moment
28:14
based on where we actually are? All
28:16
right. So first, let's make sure
28:18
that they overall improve their agent
28:21
readiness across the board. We talked
28:23
about this a little bit when we
28:25
discussed the seven mistakes, but
28:27
get your docs in order, your
28:29
culture, your strategy, have the skills
28:31
in place, have your tech stack
28:33
ready, have your agent infrastructure ready. That's
28:36
like the first and foremost thing
28:38
that you should do. Anything to add
28:40
there? I agree. I think
28:42
that there's a temptation when it comes
28:44
to agents to think that
28:46
what we should be doing, the entirety of what
28:48
we should be doing is picking an agent and
28:50
deploying it. And I think that that's a big
28:52
part of it, but there is so much infrastructure,
28:54
new capabilities, new thinking that needs to be done
28:56
around it. And a lot of that work actually
28:58
can be done more successfully even than some of
29:00
these deployments in the here and now. Yeah.
29:03
And then the second part, and the
29:05
most important at least in the
29:07
context of this conversation, is to really
29:09
have a prioritized list of agentic
29:11
use cases. And you should probably have
29:14
a list that is much more
29:16
comprehensive than what you can physically
29:18
do, or from a feasibility perspective,
29:20
do right now. But if you start
29:22
having this list and then you
29:24
go at it one by one, you
29:26
will probably be much better off
29:28
than just selecting one kind of opportunistic
29:30
one and going from there. What I
29:33
will encourage you to do is have
29:35
this list very much prioritized, not just
29:37
by value, because in many cases,
29:39
like we talked about before, the highest value
29:41
agents are the ones that you
29:43
will probably not want to tackle at
29:45
the beginning, but some kind of
29:47
a weighted prioritization between the value, the
29:49
feasibility, and basically the cost. And
29:52
then once you have this prioritized
29:54
list, you can go and start executing
29:56
them. And we can help you
29:58
figure out some of the indications for
30:00
what might be a good use. So
30:04
first of all, in terms of a good
30:06
use case, a good place to
30:08
start is to see what others
30:10
in a similar industry are doing. I
30:13
want to provide one caveat here, like
30:15
we talked about before, that sometimes that can create
30:17
a bias of you overdoing again and
30:19
again what others have already done. And in
30:21
many cases, that's not the only thing
30:23
that you can do. The other
30:25
thing that you can try and look for
30:27
are use cases in your business that might
30:29
be good candidates for agents. And I
30:31
have a few pointers to provide
30:34
you in order to identify a good
30:36
use case for agents. Some of them
30:38
are more trivial. First and above
30:40
all, it should be something that is
30:42
already being done on
30:44
a computer; digital use cases
30:46
are good for agents. Second,
30:49
think about use cases where you need
30:51
a lot of specialization, but the humans
30:53
are a bottleneck. So your agent swarms
30:55
are a good example. Ideally,
30:57
you would have so many salespeople
30:59
that you could reach out to anyone and
31:01
everyone and follow every lead, but
31:03
you just don't have enough sales
31:05
people. So that's an example of
31:07
a specialization. Another very classical
31:10
example is legal work: an
31:12
agent that goes over contracts. Often you
31:14
just don't have access to enough
31:16
lawyers, or it's very expensive. So that
31:18
will be a good example of
31:20
a use case that requires specialization. Other
31:22
places that are very good candidates
31:24
are cases where you need a
31:26
lot of availability 24/7, so of
31:28
course customer support, but that also
31:31
covers all the employee support or
31:33
think of any cases where perhaps
31:35
you want to do better by
31:37
your customers, but you currently can't,
31:39
either because you're a small company
31:41
or just don't have these services
31:43
and consider agentifying them. Another
31:45
thing that I can offer is
31:47
cases where you need to have a
31:49
lot of personalization, and you don't
31:53
have the manpower to personalize. So
31:53
marketing we can debate, but there
31:55
are other cases where you
31:57
want to handle each and every
31:59
individual separately. And then
32:01
a few additional things I can
32:03
think of. One is cases
32:05
where the more data you have,
32:07
the better the agent will behave. These
32:10
are often the places where, when I am
32:12
sending you to create your infrastructure, you
32:14
should focus. So
32:16
these are also relevant places.
32:19
Another thing that I often ask
32:21
our customers about, and you always say
32:24
don't just ask about that, but
32:26
it's important, is cases where people
32:28
dislike what they do for work
32:30
because it's repetitive, tedious, and
32:32
they would have liked to do other stuff. And
32:35
last two that I always kind of look
32:37
for in a good agent use case is
32:39
where the process is well defined. The business
32:41
process is very clear. There is a very
32:43
clear set of policies by which decisions should
32:45
be made. And lastly, and
32:47
connected to that is when we can
32:49
measure the output of agents. And that's
32:51
why perhaps coding is such a good
32:53
use case for that, because we can
32:55
measure whether the code is functional or
32:57
not. But think of other
33:00
places. If you're able to differentiate between a
33:02
good and bad outcome, that might be a
33:04
very good use case for an agent. What
33:06
type of sort of next steps should
33:08
people be thinking in terms of should they
33:10
be, you know, getting together committees to
33:12
start making decisions differently? Should they just be
33:15
throwing themselves into a first test case?
33:17
How much should they be focused on, you
33:19
know, the infrastructure build out versus just
33:21
actually getting their feet wet with agents right
33:23
now based on what's available? So
33:25
it's going to be an "it depends" kind
33:27
of answer. If they had all the
33:30
resources in the world, then do all of it:
33:32
create a list of use cases, invest in
33:34
infrastructure, and simultaneously start working on pilots of
33:36
agents, because that's the best way to learn
33:38
and augment both the infrastructure and the use
33:40
cases. If they don't have
33:42
all the resources in the world, I
33:44
would probably be very opportunistic and
33:46
say deploy your first agent and learn
33:48
from that. That's my take.
33:51
Do you believe something else? No,
33:53
I'm with you. I think that
33:55
there is no substitute for the hands -on
33:57
learning that comes by actually getting in
33:59
there and understanding capabilities. I
34:01
also think that it naturally is
34:03
going to happen. You're
34:05
going to figure out all the things that
34:07
you don't have in place as you go
34:09
try to actually deploy an agent, right?
34:11
If your data is not ready, you're going
34:13
to have to deal with that to get
34:15
that agent ready. If you start to run
34:17
into issues of decision making, maybe that's
34:19
sort of what prompts you to think about
34:21
kind of guardrails and, you know, governance more
34:23
broadly. So I kind of think that by
34:25
going after actually doing the thing, right, starting
34:27
by starting, you're likely to have all the
34:29
other pieces come along with it. Yeah,
34:32
I agree. And end up overhauling your
34:34
entire tech stack. Yeah, exactly.
34:36
And get ready for a lot of change
34:38
in a very short period of time. Nufar,
34:41
always awesome to have you on the show. Thank
34:43
you for this. We'll come back and do another, you
34:45
know, what people are doing with agents in six
34:47
months. I expect it'll be very, very different than what
34:49
we're talking about today.