Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
Use Ctrl + F to search
0:00
Okay, this year has been called the
0:02
year of AI agents. Now there's a
0:04
ton of buzz about agents on YouTube,
0:06
on X, all of these different platforms,
0:08
and actually a lot of it is
0:10
wrong. A lot of these agents are
0:12
not very good yet. We want to
0:14
give you the actual real download of
0:16
what is happening with agents and we're
0:18
doing that with one of the best
0:20
brains in the business, João Moura, the
0:22
CEO and founder of CrewAI. We're
0:24
actually going to break down what is
0:26
an agent. We're going to tell you
0:28
where you can get started to build
0:30
agents for your role today and where
0:32
they'll have impact. We're going to give
0:34
you real use cases, things that agents
0:36
are actually doing and make an impact
0:39
for businesses today in marketing and sales
0:41
in other places. And stay tuned because
0:43
we're going to tell you your job
0:45
in the future is going to depend
0:47
on the quality of the agents you
0:50
have built to help you do your
0:52
role. All that and more on this
0:54
episode of Marketing Against the Grain. I'm
0:56
your co-host, as always, Kieran Flanagan. Here,
0:58
as always, with my co-host Kipp Bodnar.
1:00
Let's get into today's show. Here's
1:03
a quick message from HubSpot.
1:05
This isn't your typical marketing
1:07
software ad. Because HubSpot isn't
1:09
typical. It's marketing made easy.
1:11
Easy. Easy. Turn one piece of content
1:13
into assets for every channel. Convert
1:16
leads in no time and get
1:18
a crystal clear view of your
1:20
campaign performance. HubSpot can do all
1:22
that and get your results fast,
1:24
like double your leads in 12 months
1:27
fast. See, I told you
1:29
this wasn't a typical software
1:31
ad. Visit hubspot.com/marketers to get
1:34
started for free. Okay,
1:47
we're here with João Moura, the CEO and founder
1:49
of CrewAI. CrewAI is your
1:51
global control plane for agents. One of
1:53
the best minds on everything there is
1:55
around agents. Joe, very happy that you're
1:57
joining the show. Hey there, thank you
1:59
so much for having me. So Joe,
2:01
this is like the year of agents,
2:03
right? This is all we've heard. I
2:05
actually have been paying attention to a
2:07
lot of the things that happened this year.
2:09
I think it was World Economic Forum
2:11
and there was another big meetup of
2:13
all the AI minds. And I think
2:15
the number one thing on everyone's lips
2:17
is like: agents, agents, agents, agents, agents,
2:19
right that's most of what we are
2:21
all talking about and so I thought
2:23
we could just tee up for our
2:25
audience maybe we'll actually start with what
2:27
is an agent maybe you can explain
2:29
to our audience like what do we
2:31
mean by agent, from one of the
2:33
best minds here to explain agents and
2:35
we can actually get into your take
2:37
on Is it really the year of
2:39
agents? And what do we even mean
2:41
by that? Like where do you really
2:43
see agents being impactful this year?
2:45
So maybe start with like what is
2:47
an agent? Like maybe explain to our
2:50
audience how you think of it. Well,
2:52
first of all, thank you so much
2:54
for having me. I'm so excited that
2:56
we get to talk about this and
2:58
yes, everyone wants to talk agents and
3:00
I gotta say that that has been
3:02
an interesting year so far. But that's
3:04
a great way to start, right. So
3:06
the way that we think about it
3:08
is everyone knows about all these LLMs,
3:10
so ChatGPT and Anthropic and everything.
3:12
So they're very good at kind of
3:14
like predicting content, right? So if you
3:16
really say like, hey, write me an
3:18
email, it will do it for you.
3:20
And if you say, well, make it
3:22
funnier, it will do that for you.
3:24
Now, the interesting thing is it almost
3:26
has some sort of cognition, right? If
3:28
you give them two options of emails,
3:30
it's gonna like choose between
3:32
the two and give you a reason for
3:34
that. So the beauty of agents is
3:36
you can exploit that feature to have
3:38
these LLMs kind of navigate a problem
3:40
on their own. So it's not a
3:42
chat anymore. You give it a task
3:44
and you can leave the room and
3:46
then this agent's gonna try to autonomously
3:48
kind of like through this idea of
3:50
reasoning figure out how to get there.
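That loop (give the agent a task, let it pick an action, feed the result back in, repeat until it decides it's done) can be sketched roughly like this. The fake_llm stand-in and the search tool are invented for illustration; a real agent would call an actual model:

```python
# Conceptual sketch of the agent loop described above: the model decides
# the next action from the task and what it has seen so far.

def fake_llm(task, history):
    """Stand-in for a real LLM call: picks the next action from context."""
    if not history:
        return ("use_tool", "search", task)    # first, gather information
    return ("finish", f"Answer for: {task}")   # then, produce the answer

TOOLS = {"search": lambda query: f"results for '{query}'"}

def run_agent(task, max_steps=5):
    history = []                               # the agent's short-term memory
    for _ in range(max_steps):
        action = fake_llm(task, history)
        if action[0] == "finish":
            return action[1]
        _, tool_name, tool_input = action
        history.append(TOOLS[tool_name](tool_input))  # tool output feeds the next step
    return "gave up"

print(run_agent("find a hotel with a pool"))   # → Answer for: find a hotel with a pool
```

The key difference from a chat: nobody is typing between steps. The loop keeps going on its own until the model signals it is done.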
3:52
So I would say that the definition
3:54
of an agent is you've got to
3:56
have agency. Yeah, and it's got to
3:58
have like some components, right? Like it
4:00
has tools, it has memory, like maybe
4:02
just talk about some of the common
4:04
characteristics of what an agent will have.
4:06
Yeah, because if you think about the
4:08
LLM in a silo, it just spits
4:10
text out, right? But in order for
4:12
you to make this more agentic, you
4:14
need to have a way to hold
4:16
that information. So you're going to need
4:18
like some sort of memory. And then
4:20
there are going to be people that are
4:22
going to ask about short-term memory and
4:24
long-term memory. You can dive into that and things
4:26
get very technical. But there's memories and
4:28
there's also tools. I would say those
4:30
are the two big components. So as
4:32
the agents are trying to do something
4:34
they're going to use those tools to
4:36
interact with other systems. ERP or CRM,
4:38
whatever that might be. Right. Really when
4:40
you think about an agent, most of
4:42
them will have memory because you need
4:44
that to have, as you said, agentic
4:46
behavior, most of them will have access
4:48
to tools because they can actually do
4:50
things on your behalf. They're autonomous. And
4:53
so as we sit here today, one
4:55
of the things that Kip and I
4:57
were jamming on earlier on, and when I
4:59
say jamming on, I mean Kipp showing me
5:01
because I'm a European and I live
5:03
in the dark ages and we are
5:05
not allowed to have access to anything
5:07
without the bureaucrats signing it off for
5:09
us. And so I do not have
5:11
access to... ChatGPT Operator today, other than
5:13
probably through a VPN at the weekend.
5:15
But it's a good segue into like
5:17
the year of the agents and how
5:19
you think about that. So OpenAI
5:21
launched that. I think that, Kipp, your
5:23
take was, it's cool for geeks like
5:25
us. Why don't you give us your
5:27
take, Kipp, I don't know what's on
5:29
your mind, and then I would love
5:31
to get Joe, your take on their
5:33
launch and just where we are in
5:35
agents in general in terms of capabilities.
5:37
In terms of capabilities, it launched as part
5:39
of their ChatGPT Pro edition, which
5:41
is the $200-a-month edition. So I
5:43
immediately had to upgrade from $20 a
5:45
month to $200 a month. They took
5:47
your money. They took my money like
5:49
quick. I wanted o1 pro
5:51
and everything too. So I got to
5:53
do that upgrade. So there's already a
5:55
cost barrier. And what it does is
5:57
it has basically a browser-control agent
5:59
where OpenAI has built their own browser
6:01
and the AI can go and navigate
6:03
based on a request. So you could
6:05
make hotel reservations, flight reservations, dinner reservations,
6:07
do research and it's kind of slow.
6:09
It's kind of clunky. If it's a
6:11
good background task and it doesn't require
6:13
me to like log in or do
6:15
a bunch of interaction, it's just like,
6:17
hey, can you look up a hotel
6:19
with a pool in
6:21
the city? That's great, it can do
6:23
that and give me all
6:25
the information. Real slow though, it doesn't do it
6:27
real quick. It's much slower,
6:29
like I know it's like 10 times
6:31
slower than a human doing it, right?
6:33
Yes, you could have gone to Upwork,
6:35
hired a freelancer to be your personal
6:37
searcher, and had them search for it
6:39
in the time that OpenAI's
6:41
Operator actually completed the task. Look,
6:43
it is very slow right now. I
6:45
think it's a peek at what's to
6:47
come, right? I'm interested, Joe, in your
6:49
take, but it's a look at a
6:51
future that is a little ways away. Well,
6:53
I'm going to say, I agree with
6:56
you, but I have, I think, would
6:58
be a hot take. Maybe not a
7:00
hot take. I don't know. And that's
7:02
how I started to build Crew, right?
7:04
My first experience with crew was building
7:06
agents that would help me with like
7:08
posting things on socials. I was never
7:10
really good at that. I had all
7:12
these ideas, but from getting that idea
7:14
and making that into a well-drafted
7:16
thing that you feel comfortable putting
7:18
out there in the world, like there's
7:20
a huge gap in there, right? And
7:22
I could do it, for sure, it's
7:24
just a matter of, you're gonna
7:26
have to sit down, you're gonna have
7:28
to spend one hour kind of like
7:30
doing this, and if you want to
7:32
do it consistently, you're gonna have to
7:34
do it every day. And I was
7:36
never able to do it. The minute
7:38
that I got agents to help me
7:40
with that, I started doing it every
7:42
day. Because you're right, they would take
7:44
way longer, but now I could spit
7:46
out my crazy idea, and I would
7:48
make a coffee, I would get to
7:50
work on something, I would forget that the
7:52
thing was running, and then I would
7:54
go back and it was ready. I
7:56
was like, all right, this is good.
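That fire-and-forget pattern (kick off a run, go make a coffee, come back for the result) is essentially background execution. A minimal sketch with Python's standard library; slow_agent here is a made-up stand-in for a real agent run:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_agent(idea):
    """Stand-in for a slow agent run, e.g. drafting a social post."""
    time.sleep(0.1)                  # pretend this takes minutes
    return f"Polished draft of: {idea}"

# Fire it off, go do something else, come back for the result later.
with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_agent, "my crazy idea")
    # ... go make a coffee, work on something else ...
    print(future.result())           # blocks only when you finally ask for it
```

The point is that latency stops mattering once the work is off your critical path: the agent can be 10x slower than a human as long as you are not waiting on it.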
7:58
So I hear you. I think they're
8:00
not optimal or faster than humans by and
8:02
large. There are ways that you can
8:04
get there, specifically if you fine-tune
8:06
certain models for certain things, but I
8:08
do see a lot of value, especially in kind of
8:10
like this idea of you firing a bunch
8:12
of them in the background, and
8:15
you've got to do something else and just
8:17
not think about it. As you said, like,
8:19
I agree. What Kipp said as well is
8:21
like, the latency doesn't matter if it's a
8:23
task that you just
8:25
want done. So here's a good example of
8:29
something I think operator would be really good
8:31
at. Let's say you're moving into a
8:33
new apartment, you have a room,
8:36
and like maybe you've picked out a bed
8:38
or something. You can upload a picture of
8:40
the room, you can upload a picture
8:42
of the bed, and you can have
8:45
those things checked to see if they actually go together, and it just
8:47
happens in the background, and that's stuff that
8:49
you would have had to do yourself. Right. The long-term goal is to
8:58
have a personal assistant, which is what they're all
9:00
building towards. There's one other thing I want to
9:02
just touch on here because it's important for agents
9:04
in general, Joe, before we kind of get into
9:07
just your take on
9:09
the impact agents will make this year.
9:11
And it actually is the UX pattern that
9:13
OpenAI had in Operator, because one
9:15
of the big question marks around autonomous agents
9:17
is, what's the right UX patterns so people
9:20
feel comfortable with them? And what I mean
9:22
by that is, you know, these agents are
9:24
autonomous, so when Kipp asked it to book
9:26
a table on open table, should it just
9:29
come back and say, it's booked, or should you
9:31
actually be able to like see the agent
9:33
complete the task, so you feel... comfortable with
9:35
the agent doing something on your behalf and
9:37
they went down the path to have a
9:39
UX pattern where you can see the agent
9:41
doing its work and at any point in
9:43
time you can take control away from the
9:45
agent. But what's your general take on autonomous
9:47
agents in terms of how comfortable humans are
9:49
going to be to integrate them into the
9:51
workflow like the right kind of UX pattern?
9:53
Yeah, that's a great question. I actually spent
9:55
some time talking with someone about it,
9:57
and it was pretty good, because if you
10:00
think about it, AI is moving very fast,
10:02
right? So it's not waiting for AI-
10:04
native protocols. Because if you think about
10:06
it, it should not even use a
10:08
browser, right? A browser doesn't make sense.
10:10
The concept of clicking the buttons
10:12
and everything, that doesn't make any sense,
10:14
or even
10:16
keyboard and mouse, that just increases
10:18
latency and reduces throughput for these models.
10:20
Like, as a matter of fact, they should
10:22
not even use language to
10:24
communicate what
10:26
is happening there. There's a security aspect
10:28
of it because you can always see
10:30
what is happening right now. I just
10:32
think it's getting better and better. You
10:34
want to make sure that you're able
10:36
to do that. But there's also that
10:38
ability for you to feel reassured about
10:40
it. So I think it's going to
10:42
come to a time where you're going
10:44
to feel good about just firing requests
10:46
into the void and things are going
10:48
to get done for you. But I
10:50
think right now people just wanted to
10:52
feel they are more in control of
10:54
these things, right? And this is true
10:56
not only on the personal level, but
10:58
also on companies. If companies are thinking
11:00
about, and this we're seeing firsthand on
11:02
our enterprise deals, like if a company
11:04
was to automate a critical part of
11:06
their process, they want to make sure
11:08
that they can visualize and control this
11:10
and they understand what is happening, and
11:12
they can audit it later on. So
11:14
I think it's a temporary... that might
11:16
change in the future, but I just
11:18
don't know yet. Yeah, I was talking
11:20
to someone who, very
11:22
early on when agents first came out,
11:24
was playing with all the technology,
11:26
and they had set up an autonomous
11:28
agent to do some stuff on email,
11:30
and the agent was meant to craft
11:32
emails, put them in the draft, and
11:34
then they would go in and look
11:36
at them and then send them or
11:38
not send them. And they had realized
11:40
that the agent started sending them. They
11:42
were sending some like pretty bizarre emails.
11:44
They were like, yeah, like I feel
11:46
the great UX pattern is I need
11:48
to be able to see what the
11:50
agent's doing. And so maybe that's a
11:52
good segue to like, you know, there's
11:54
a lot of, hey, this is transformational
11:56
this year. Agents are going to be
11:58
part of the workforce. That was the
12:00
common talking thread over the past couple
12:02
of weeks from every tech leader there
12:04
is. What do they mean by that?
12:06
When you look at what's happening with agents, what do
12:08
you think they mean by that and
12:10
what are you bullish on and what
12:12
are you not bullish on when it
12:14
comes to agents? Yeah, that's a good
12:16
one. Well, first of all, I think
12:18
I want to show you a visual
12:20
real quick because that will help me
12:22
to make my point and that is
12:24
agents are happening, right? So in here
12:27
what you're seeing is the number of
12:29
crew executions per month. This was pulled
12:31
a few days ago, so January is
12:33
still going. And up to this point
12:35
it has been over 16 million crews. Each
12:37
crew has many agents within it. Some
12:39
have up to 21, that's the highest
12:41
number that I have seen, but you
12:43
can go higher than that. So what
12:45
we're talking here is tens of millions
12:47
of agents being executed every month. And
12:49
the reason why I want to double
12:51
click on that is just that it
12:53
is happening. In my mind, it is
12:55
happening. The genie is not going back
12:57
into the bottle. Now it's a matter
12:59
of how fast it will happen and
13:01
how good it gets in what period
13:03
of time. But I think what a
13:05
lot of what we're seeing on companies
13:07
being bullish about it is because, and
13:09
I'm going to take one step back,
13:11
a lot of people are comparing kind
13:13
of like agents with kind of like
13:15
the internet early days, and I think
13:17
that's an interesting way to correlate the
13:19
the two. But if you were online on
13:21
the internet on day zero, you would
13:23
have no upside, no impact on your
13:25
bottom line, because no one was online.
13:27
But what we're seeing with AI is
13:29
companies implementing it, and like two quarters
13:31
later they're reporting impacts on their bottom
13:33
line. So you've got some of those
13:35
10-Ks and 10-Qs and you
13:37
see companies like Walmart starting to
13:39
save millions on support. You know, like,
13:41
all right, something is happening here. So
13:43
I think there's a mandate at the
13:45
executive level of these companies, at
13:47
the board level of these companies,
13:49
that this is happening. Three years from
13:51
now, we need to have figured out
13:53
how we're going to manage and deploy these
13:55
resources. And from the edge side, again,
13:57
everyone's tinkering with it and excited. So
13:59
it's almost like a claw motion where
14:01
like incentives are aligning, and I think
14:03
that is just fueling this up. So
14:05
I'm bullish that this is the year
14:07
of agents and what I mean by
14:09
that is people are going to deploy
14:11
a lot of agents this year. They're
14:13
going to try a lot of things
14:15
this year. Now is this the year
14:17
where companies are going to automate entire
14:19
departments? I don't think just yet, but
14:21
I think this year is where things
14:23
start to get pretty serious.
14:26
Could you maybe tell us how companies that
14:28
are deploying these crews and agents
14:30
are doing it in the right
14:32
way. So to your point, the
14:34
wrong way is probably I'm going
14:36
to go in and create agents
14:38
to replicate what these humans are
14:40
doing, versus a lot of the
14:43
success I have seen in agents
14:45
is, like, they complete micro-tasks that
14:47
make up part of your role freeing
14:49
you up to actually spend time on
14:51
things that are much more important right
14:54
so that actually the human can become
14:56
much better at that role but could
14:58
you maybe talk to us like where
15:00
do you see companies deploying them in
15:02
the right way and like what examples
15:04
kind of use cases are you seeing that
15:06
work really well? Yeah so I actually
15:09
spent a good time talking with Jacob
15:11
Wilson, he's the commercial CTO for
15:13
Gen AI at PwC. And it's amazing to
15:15
work with them. They're using Crew a
15:17
lot. And one big thing that we
15:19
talked about, and there's a whole interview,
15:21
I can send a link over to
15:23
you if people want to watch, is
15:25
there is a cultural aspect on adopting
15:28
agents in the company, right? People like,
15:30
people fear that, or people are trying to
15:32
understand what role they play. The
15:34
companies that are being most successful, and PwC
15:36
is one of them. What they are
15:38
doing is they're kind of promoting people,
15:40
right. So, you're still accountable for the
15:42
end result, you're still accountable
15:44
for reviewing this, putting a nice
15:46
bow on it, presenting this, but
15:49
now you have this extra tool
15:51
and these agents can do something for
15:53
you. So, a cool example that I
15:55
can mention is we're working with a
15:57
telecom company on a legal use case.
16:00
like they have their legal
16:02
people that can do everything, but
16:04
what is happening now is they're
16:06
automating a lot of the contract
16:08
analysis with these agents beforehand. So
16:10
by the time that it gets
16:12
to legal, it already has recommendations,
16:14
red lines, a bunch of other
16:16
things. So kind of like basically
16:18
scaling things that they couldn't do
16:20
before. That would have been prohibitively
16:22
expensive. But there's so many more
16:24
use cases. And we can talk
16:26
about the sales and the marketing
16:28
use cases and the back office
16:30
automation. There's quite a lot going
16:32
on out there. We've talked a
16:34
lot about kind of where agents
16:36
are and now I was trying
16:38
to understand like what are the
16:40
core use cases that people are
16:42
actually building? What should people go
16:44
and do? There are a lot
16:46
of people who watch our show
16:48
who are like, hey I just
16:51
want to like understand what I
16:53
should be doing, how I go
16:55
out and build an MVP of
16:57
that to see if it actually
16:59
makes sense for me, my business,
17:01
what have you? Like what's happening
17:03
and where should people start? These
17:05
are the most common horizontals within
17:07
a company that we're seeing agents
17:09
and use cases being deployed. Now,
17:11
there's a few interesting things that
17:13
you can infer from this. One
17:15
is there is no clear winner,
17:17
right? There's no like, oh, people
17:19
are using for marketing alone. No,
17:21
it's very much spread out. Which
17:23
for companies like us is good
17:25
news, because it means that you
17:27
can land and expand into other
17:29
areas, right? But if you interview
17:31
these people, as we did, and
17:33
talked closely with them, the
17:35
common pattern is actually starting with
17:37
simpler use cases. It's what we
17:40
call low precision versus high precision.
17:42
So low precision use cases, they
17:44
require, let's say, 90% certainty or
17:46
accuracy on their outputs, but high
17:48
precision use cases require 99.99%. So
17:50
an example of a low precision
17:52
could be, well, I want agents
17:54
that will help me draft presentations
17:56
for sales calls out of call transcripts
17:58
or my CRM information. And high
18:00
precision use cases that we are
18:02
seeing out there is helping companies
18:04
and banks fill IRS forms, where
18:06
like you don't want to get
18:08
that wrong, right? Yeah, you can't
18:10
mess up your tax forms. It's
18:12
like you're talking about a big
18:14
corporation. Just an anecdote, funny enough.
18:16
Some of these forms, they are
18:18
70-plus pages long, and they
18:20
come with an instruction manual that
18:22
is 620 pages long. So yeah,
18:24
agents can help with that, but
18:26
that's more high precision use cases,
18:29
right? If you start there,
18:31
you might get burned. Start
18:33
with low precision and scale from
18:35
there. That's really great advice, I
18:37
just want to kind of recap
18:39
that for the audience. So if
18:41
you were like in a role
18:43
and you wanted to even start
18:45
with agents before you've even started
18:47
to get into how to build
18:49
them and we can get into
18:51
that and show you some use
18:53
cases, what your recommendation is, is
18:55
like, look. And AI can help
18:57
with this because I've actually used
18:59
it as a test for this.
19:01
If you actually take a role,
19:03
let's say your role is a
19:05
BDR and you split out your
19:07
tasks into low precision, high precision,
19:09
and you can actually start to
19:11
pick some ones that are in
19:13
that low precision category, and that's
19:15
like some places to experiment with
19:18
agents. Is that like the right
19:20
way for someone to get started?
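The splitting exercise Kieran just described can be sketched as a simple triage. The example tasks and the 99% threshold here are invented for illustration, not a rule from the episode:

```python
# Toy triage of a role's tasks by the accuracy they require.
# Low precision: a human reviews the output anyway, so errors are cheap.
# High precision: errors are costly, so don't start automating here.

tasks = {
    "draft outreach emails from CRM notes": 0.90,
    "summarize call transcripts": 0.90,
    "file IRS forms": 0.9999,
}

def triage(tasks, threshold=0.99):
    low = [t for t, acc in tasks.items() if acc < threshold]
    high = [t for t, acc in tasks.items() if acc >= threshold]
    return low, high

low, high = triage(tasks)
print("experiment with agents here first:", low)
print("hold off until you have experience:", high)
```

The output of the first bucket is where the episode suggests starting: low risk, reviewed by a human, and cheap to get wrong while you learn.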
19:22
Yeah, the way that we put
19:24
it is basically you have four
19:26
bullet points. One is be an early
19:28
adopter, you want to get ahead.
19:30
Two is don't wait for other
19:32
people's use cases, right? A lot
19:34
of people want to know, like,
19:36
what is company X
19:38
doing? Like, you don't want to
19:40
go there. So start simple. That's
19:42
basically what you're saying. Start with
19:44
something simple. And when you expand,
19:46
you want to expand into kind
19:48
of like the low risk, high
19:50
impact kind of like use cases
19:52
and go in that direction. But
19:54
yeah, exactly as you said. And
19:56
then in order for you to
19:58
build them, there's so many different
20:00
ways right now, right? Like there's,
20:02
and if anything, there's gonna be
20:04
even more. For less technical
20:07
people, there's like no-code platforms.
20:09
And we offer that as well,
20:11
for example, CrewAI, but there's
20:13
many others. Like if you're a
20:15
more technical person, you can use
20:17
frameworks like Crew itself, where you
20:19
can actually code in Python some
20:21
of these agents and some of
20:23
these prompts. So I think there's
20:25
a bunch of different flavors that
20:27
people can use. Now, a lot
20:29
of the true value gets unlocked
20:31
when you get more technical people
20:33
involved, I would say you got
20:35
that right. And that is it
20:37
starts with kind of like low
20:39
precision use cases and simple and
20:41
then expand into low risk high
20:43
impact. Right. Let me tell you
20:45
about a great podcast. It's called
20:47
Creators Are Brands. It's hosted by
20:49
Tom Boyd. It's brought to you
20:51
by the HubSpot Podcast Network. Creators
20:53
Are Brands explores how storytellers are
20:56
building brands online. From the mindsets
20:58
to the tactics to the business
21:00
side, they break down what's working
21:02
so you can apply that to
21:04
your own goals. Tom just did
21:06
a great episode about social media
21:08
growth called 3K to 45K on
21:10
Instagram in one year, selling digital
21:12
products and quitting his job to
21:14
go full-time creator with Gannon Mayer.
21:16
Listen to Creators Are Brands wherever
21:18
you get your podcast. So maybe
21:20
we want to get in to like
21:22
show some of these use cases, but
21:24
I do think an important point to
21:27
make is, okay, I've decided that I
21:29
understand what an agent is, I've looked
21:31
at my role and I understand like
21:33
what are good use cases for an
21:35
agent. The other thing you mentioned is
21:38
like how agents are being adopted within
21:40
the company. like how they're being built
21:42
for different teams. Can you maybe just
21:44
talk a little bit about that trend,
21:46
like how you see companies adopting agents?
21:49
Yeah. And you mentioned it's like still
21:51
quite a technical thing and maybe kind
21:53
of just touch on that. Yeah. for
21:55
sure. I think it's interesting because early
21:57
days, what you would have with like
22:00
LLMs, and I think a lot of
22:02
people probably experienced this firsthand, is you
22:04
would have, on the edges of the
22:06
organization, on individual teams, people would just
22:08
pick up LLMs to do work for
22:11
them. So someone that's a little more
22:13
savvy would start to play around with
22:15
ChatGPT and they would find these
22:17
amazing prompts that would help them with
22:19
kind of like customer proposals, right. And
22:22
then that became a problem, because, well,
22:24
this person is putting in information that
22:26
might not be supposed to be there,
22:28
the company doesn't approve, and maybe this
22:30
person unlocked an amazing use case, but that
22:33
information now, that knowledge, is siloed.
22:35
So what we are seeing with agents,
22:37
at least on the enterprise sales motion
22:39
for Crew, is more of a central
22:41
deployment. So usually under a CIO, a
22:44
CTO, a head of AI, and you
22:46
sell into this department that is going
22:48
to configure everything, and then start to
22:50
enable these departments individually. And the cool
22:52
thing about that is you're going to
22:55
have way more control on what are
22:57
the LLMs that are being used. Do
22:59
I want to add filters for PII
23:01
and personal information? And making sure that
23:03
I'm controlling all these use cases so
23:06
they are reusable, and then even enabling
23:08
people that can't code to use the
23:10
platform to build it. So that's kind
23:12
of like what we're seeing in terms
23:14
of the enterprise adoption. And again, funny
23:17
enough, like there's a lot of kind
23:19
of like low code, easy to roll
23:21
out and templates that you can use.
23:23
But once you want to get
23:25
into those very kind of like chunky
23:28
use cases, right? Because if you forget
23:30
the name AI agents for a second,
23:32
we are talking about AI-powered customized
23:34
processes, because all these companies have no
23:36
equal processes. That's when technical people
23:39
being involved unlocks a lot of value,
23:41
especially on integrations with homegrown systems inside.
23:43
Right. Yeah, like they're much more powerful
23:45
when they're deeply intertwined with your unstructured
23:47
data, and deeply intertwined with your
23:50
systems. Yeah, but I gotta say, it's
23:52
still very much early days. So we
23:52
basically polled around 4,500 people from different
23:54
companies, and only 15% actually have features
23:56
in production right now. If you isolate
23:58
this, only looking at large enterprises, then
24:01
we're talking about 23%, so it's a
24:03
higher number, which kind of hints that
24:05
enterprises are moving a little faster here.
24:09
But for a lot of people, it's
24:12
in like early days or... They just
24:14
have done a few deep dives. So
24:16
I think this is very interesting to
24:18
watch. What is that Joe? The 15%
24:21
is enterprise companies that have used an
24:23
agent to complete a task? Or what's
24:25
the 15%? No, that 15% is
24:27
overall companies, not only enterprises, that have
24:29
features with AI agents in production. And
24:32
if you cut off and look at
24:34
only the enterprise, that number bumps to
24:36
23%. Okay. But that is having like
24:38
AI agent powered features in production from
24:40
the companies that we're talking with. Not
24:43
necessarily in their product, it might be
24:45
like internal automations, right? Might be a
24:47
back-office automation. Okay, so it is internal,
24:49
yeah. I think internal is actually, this
24:51
is one thing that I found very
24:54
funny. I mean, I had this hypothesis
24:56
in 2024 that I would see way
24:58
more SaaS companies kind of like adopting
25:00
agents, just kind of like trying to
25:02
reinvent themselves. But funny enough, I'm not
25:05
seeing a lot of that. It's a
25:07
lot of more traditional organizations that are
25:09
kind of like trying to figure out
25:11
how to get more efficient. In the
25:13
SaaS companies that we have talked with,
25:16
I get more reluctance from them on
25:18
it. I'm not sure where that's
25:20
coming from. But I have been proven wrong.
25:22
I feel like that number is higher
25:24
than I would have expected. I would
25:27
have expected like single-digit adoption of like
25:29
agents being deployed. It just it shows
25:31
that even though it's early there is
25:33
a lot of just crappy stuff that
25:35
has to get done out in the
25:38
world that agents are good enough to
25:40
go and solve today, right? Or otherwise
25:42
that number would be much, much lower
25:44
than it is in terms of like
25:46
agents deployed. What I would say though
25:49
is there is one caveat, right? These
25:51
people, they were coming into Crew to
25:53
answer this form, so they are definitely
25:55
more savvy. They're looking into
26:00
agents, so I think if you look
26:02
at the broader population of like any
26:04
company out to like the laggards. We're
26:06
still so early. I think the point
26:08
here is like, if you're listening to
26:11
the podcast and you're following along and
26:13
you're thinking, well, I'm going to create
26:15
an agent and do something, you're in
26:17
the fast movers. Yes. Exactly. I think
26:19
one of the things that would really
26:22
help our audience is to showcase some of the
26:24
agents that have been built for real
26:26
use cases and go through like what
26:28
these agents are actually doing, for maybe
26:30
a salesperson or a marketer or something
26:33
that you really think is like a
26:35
good example of an agent in action.
26:37
Yes, so I think on the marketing
26:39
and sale side, one of the most
26:41
interesting ones that I have seen is
26:44
agents that are doing a couple of things.
26:46
They start with enrichment. So the way
26:48
that they deploy this was actually an
26:50
in-product feature and a back office, kind
26:52
of like automation. So when someone would
26:55
come into their website and create an
26:57
account, they would have agents go online
26:59
and start researching this person. So up
27:01
to this point, that's okay, right? This
27:03
is kind of like regular enrichment, like
27:06
find what this person's role is, find
27:08
like more about this company vertical and
27:10
all that. So that's good. Now, where
27:12
things start to get interesting is they
27:14
got some agents to go one step
27:17
ahead and, given the information that they
27:19
found, come up with hypotheses on how
27:21
this person is going to use their
27:23
product. So what is the interest? If
27:25
this person is kind of like a
27:28
CMO at a company, how they are
27:30
going to use it, like what value they're
27:32
going to get from this product, and
27:34
create kind of like three hypotheses. And
27:36
then do the same thing for the
27:39
company, like for a company in this
27:41
vertical, what do you believe are like
27:43
the main three things that it would take.
27:45
So now that the agents have done
27:47
that, they convert it into JSON, so
27:50
structured data, and then push it to
27:52
two places: their HubSpot and
27:54
into their product database. So what that
27:56
means is now their email
27:58
marketing is, like, super hyper-targeted, mentioning
28:01
not only the name of the person,
28:03
the company, but like highlighting the features
28:05
and the ways that they could leverage them.
28:07
And then in the product, and this
28:09
is kind of like what they're working on,
28:12
in the product, they preview some of
28:14
the information and the templates that they
28:16
show, like basically using that inference that
28:18
they made. So a very interesting use
28:20
case, and I think a great kind
28:23
of use case for agents in general.
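The flow described here — research a new signup, draft up to three usage hypotheses, convert them to structured JSON, and push the result to both the CRM and the product database — can be sketched in plain Python. Every function name, field name, and canned value below is an illustrative assumption, not the company's actual implementation; a real build would back the research and hypothesis steps with an LLM plus web-research tools and push to real HubSpot and product-database APIs.

```python
import json

# Stub "agents" so the pipeline shape is visible without an LLM.
def research_signup(signup):
    """Enrichment step: find the person's role and the company's vertical."""
    return {**signup, "role": "CMO", "vertical": "e-commerce"}

def draft_hypotheses(profile):
    """Hypothesis step: guess how this person might use the product."""
    return [
        f"A {profile['role']} may want campaign reporting",
        f"A company in {profile['vertical']} may want lead scoring",
        "May want to personalize onboarding emails",
    ]

def to_payload(profile, hypotheses):
    """Structuring step: convert findings into JSON, keeping three hypotheses."""
    return json.dumps({"contact": profile, "usage_hypotheses": hypotheses[:3]})

def push(payload, sinks):
    """Delivery step: send the same payload to every destination."""
    for sink in sinks:
        sink(payload)

# Usage: two lists stand in for HubSpot and the product database.
crm, product_db = [], []
profile = research_signup({"name": "Jane Doe", "company": "Acme"})
push(to_payload(profile, draft_hypotheses(profile)), [crm.append, product_db.append])
```

The key design point from the episode is the last step: the same structured payload feeds two consumers, so email marketing and the in-product experience stay consistent.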
28:25
Yeah, it's somewhat similar to what we
28:27
do. And I think the example going
28:29
back all the way to what an
28:31
agent is and having memory tools, like
28:34
an example of a tool that is
28:36
really good for research is something like
28:38
Perplexity's new Sonar. Like, Perplexity is a
28:40
pretty good research tool that the agent
28:42
can have access to and do some
28:45
of that research on your behalf. So
28:47
I think that's a good example of
28:49
like the agent being able to autonomously
28:51
complete those tasks without any real human
28:53
in the loop and you as a
28:56
sales rep have that stuff at hand
28:58
and so you can be much more
29:00
productive because you have this stuff being
29:02
done for you by a small team
29:04
of agents who each are like specialized
29:07
in one of those tasks. Exactly and
29:09
I think we go back to the
29:11
comparison that we make with the operator
29:13
early on right like yes you as
29:15
a sales rep you could go out
29:18
and research this customer and make sure
29:20
that you're engaging them like on a
29:22
very custom format or like prepare for
29:24
every meeting that you have. Yes, you
29:26
absolutely can. You might
29:29
be able to do it faster than
29:31
what an agent would do, but if
29:33
you can do that at scale, that
29:35
means that you can take more meetings
29:37
and be better prepared for them. And
29:40
there's also something beautiful about you being
29:42
able to customize it, right? Let's say
29:44
that you always have that one sales
29:46
rep in your company that
29:48
is just like the best, right? That
29:51
person does the best prep. Why don't
29:53
you just like try to replicate that
29:55
for everyone and now everyone has that
29:57
level of notes? I mean, the sales
29:59
reps might not be too happy about
30:02
it, but it might definitely be
30:04
an interesting experiment. I would love to
30:06
get into another use case you mentioned
30:08
off-mic, but I think one of
30:10
the interesting things here, because it was
30:13
a point you made which is... the
30:15
rep can have these agents and be
30:17
much better at their role and other
30:19
reps might not be happy about it.
30:22
It could create some competition to actually
30:24
use agents because you can't compete unless
30:26
you have access to the same sort
30:28
of help. And I don't know if
30:30
you saw there was an interview from
30:33
Satya Nadella, the Microsoft CEO, this week,
30:35
and one of the things he said
30:37
was in the future, people might be
30:39
hired because of the agents that they
30:41
have helping them do their role. And
30:44
so he can imagine a world where
30:46
you go to LinkedIn and instead of
30:48
having certifications or anything like that, and
30:50
even prior work experience, you actually list
30:52
out the agents that you have built
30:55
to help you be like the kind
30:57
of employee that you are. Let's just
30:59
get your kind of thoughts on that
31:01
as a future version of what like
31:03
a great employee might look like. I
31:06
a thousand percent agree. Honestly, not only because
31:08
of the agents themselves, but also because of
31:10
what that tells you about that person.
31:12
And I can tell that we are
31:14
doing this at CrewAI. So for example,
31:17
when we interview people for engineering roles,
31:19
we tell them during the interview, you
31:21
are allowed to use anything. Like you
31:23
can use ChatGPT, you can use
31:25
Anthropic, you can use Cursor, you can
31:28
search Google, whatever it is, you're allowed
31:30
to use. If they don't use it,
31:32
it's an automatic pass. And how well
31:34
they use it actually weighs a lot
31:36
on whether we're going to make them
31:39
an offer or not. And that can
31:41
be counterintuitive for a lot of people
31:43
like, well, but are you assessing like
31:45
their engineering skills? Well, on the day
31:47
to day, they're going to have access
31:50
to these tools anyway. So I want
31:52
to know how well they use it.
31:54
And I would say like probably what
31:56
he's hinting there is some of that
31:58
as well: the agents that they have
32:03
as their companions, but also the
32:05
fact that they were able to create something like
32:07
that and what that might tell about that person.
32:07
Yeah. Coming back to the use cases you
32:09
mentioned off-mic, I think one of the
32:12
ones we should actually just cover real quick
32:14
because everyone loves a good SEO content use
32:16
case, could you maybe just go through that
32:18
use case to just give another example
32:20
of what agents can help with? Yes.
32:22
I love that one. That was so
32:25
good. So this was a startup that
32:27
we were helping on using agents. It
32:29
was very early days at CrewAI. It
32:31
was very interesting. And what they wanted
32:33
to do is everyone likes to do
32:36
not only SEO, but overall conversion
32:38
rates, right? And A/B testing is such
32:40
a big thing. And everyone knows that
32:42
it works, but takes a lot of
32:44
work to do it well. So what
32:46
we were building together was agents that
32:49
would, like, given your product, go
32:51
into your website, take screenshots
32:53
of your website, understand, like, the
32:56
copy, everything. Then they would research
32:58
their competitors, go into their
33:00
websites and look at all their copies
33:02
and everything, and then they'd create hypotheses,
33:05
right? What should we change on your
33:07
website, given the understanding the agents are
33:09
able to build around
33:11
the industry and the competitors, in
33:14
order to get you a better conversion? And
33:16
then the idea is that they would go
33:18
all the way to implementing those A/B tests
33:20
and measure it so that you can then
33:22
choose. So it's basically automating the whole
33:25
kind of like A/B testing kind of
33:27
like process from SEO to copy and
33:29
everything, but going this one step beyond
33:32
kind of like understanding the industry and
33:34
the competitors and everything. And again, something
33:36
that you could do yourself, but that
33:39
would take quite a lot of time.
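The division of labor in this use case — a short human brief in, ranked A/B hypotheses out, with a human approving which ones actually run — can be sketched as a simple contract in Python. All the class, function, and field names below are illustrative assumptions, not the startup's actual product; the agent step is a stub where a real system would screenshot the site, research competitors, and draft hypotheses with an LLM.

```python
from dataclasses import dataclass

@dataclass
class OptimizationBrief:
    """The human input: website, industry description, and the goal to optimize."""
    website: str
    industry: str
    goal: str

@dataclass
class ABHypothesis:
    """What the agents come back with after studying the site and competitors."""
    change: str
    rationale: str
    approved: bool = False

def propose_hypotheses(brief):
    # Stub for the agent step: canned output so the contract is visible.
    return [
        ABHypothesis("shorten the hero headline",
                     f"competitors in {brief.industry} lead with benefits"),
        ABHypothesis("move the signup CTA above the fold",
                     f"targets the {brief.goal} goal directly"),
    ]

def approve(hypotheses, indexes):
    """Human-in-the-loop step: pick which hypotheses actually run as A/B tests."""
    for i in indexes:
        hypotheses[i].approved = True
    return [h for h in hypotheses if h.approved]

# Usage: the human supplies the brief and approves the second hypothesis.
brief = OptimizationBrief("https://example.com", "B2B SaaS", "signup conversion")
to_run = approve(propose_hypotheses(brief), [1])
```

Keeping the approval step explicit matches what is described next in the episode: the agents generate and research, but a person chooses what ships.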
33:41
So it's basically giving you recommendations on what
33:43
A/B tests to run by ingesting that data
33:45
and actually doing the A/B tests themselves. It's
33:47
actually going ahead, so you just
33:49
basically come in and update the code. Yeah.
33:52
Yeah. Explain what the agent does. Do
33:54
you have to start off by giving it
33:56
data? Like what's the human doing and what's
33:58
the agent doing? So the input there was:
34:00
this is my website, this is the description
34:02
of my industry, this is what I'm trying
34:04
to optimize. This is the human input. Okay,
34:07
and then the agents would go around and
34:09
do everything behind the scenes and they would
34:11
come back with all these, like, A/B hypotheses
34:13
for testing. That's pretty awesome. And then what
34:16
the company that we were working with was
34:18
actually doing is doing a product around this
34:20
right? So you would be able to see
34:22
all the hypotheses and you would say, yes,
34:24
let's do this one this one this one
34:27
and then they would run for a few
34:29
days and you could implement it but that
34:31
was like, that was not the agents anymore,
34:33
that was the proper product. Did they fine-
34:35
tune the agent or use RAG or
34:38
anything to build the hypotheses, like, you
34:40
know, or was it just public information,
34:42
the agents know everything about the internet?
34:44
Public information, yeah. Okay. Yeah, I think there
34:47
was something they were doing with the images
34:47
that was kind of like more proprietary to them,
34:49
like parsing the images, where they were doing
34:51
something fancy. There's not
34:53
only the screenshot, but there's also the HTML,
34:58
and then they would kind of like create
35:00
a better understanding of like the web page
35:02
by doing that. But no, that was basically
35:04
a lot of kind of like the open
35:06
models that you have now. Very cool. This
35:09
has been a great conversation. I wonder if
35:11
where we should end is, okay, we started
35:13
with, you know, explaining the agent, how to
35:15
get started, how they're getting adopted, but really
35:18
like framing it all and like we do
35:20
believe this is the year of agents. You have some
35:22
like really great stats to show that this
35:24
is really happening, right? This is not all
35:26
hype, but what do you think is hype
35:29
in terms of what you hear in how
35:31
agents are spoken about? What is the mismatch
35:33
you see today and where the technology is
35:35
today and where expectations may be for how
35:37
people want to use agents like where do
35:40
you see the biggest mismatch when you speak
35:42
to people? I believe, especially this year, I don't
35:44
think it's going to be kind of like
35:46
once and done. Like we're seeing this with
35:48
operator, right? And like now there's a bunch
35:51
of tweets where, like, people said it does
35:53
things and kind of like fails mid-
35:55
way through and you got to step in and
35:57
take over and all that. So I think
36:00
this is not the year where we're going
36:02
to have like completely end-to-end, especially on the
36:04
more high precision kind of processes, kind of
36:06
like, oh, agents are doing everything. I think
36:08
that will be kind of like a step-by-step.
36:11
The other thing is, implementing, especially the
36:13
more complex use cases, is going to be
36:15
a way like a... bigger lift than a
36:17
lot of people would believe. I think there's
36:19
a lot of glue in these companies nowadays
36:22
and in the software they use and how they
36:24
connect to each other that in order for
36:26
agents to, like, be able to navigate
36:28
those paths. You're going to need to have
36:31
like clear code and clear instructions on how
36:33
to do it. And I think like there's
36:35
a reason why everyone is doing the browser
36:37
set of things first, right? That is easier.
36:39
Like there's a common interface. But there's a
36:42
lot of companies out there that have software
36:44
that's not even online. It's desktop apps. How
36:46
do you handle that? So I think there's going
36:48
to be a lot of more challenges in
36:50
there. So I think this year is going
36:53
to be definitely where we're going to see
36:55
a lot of agents going into production. But
36:57
it's very much early days still. This is
36:59
not the year, like, where agents are taking
37:02
over, like, the workforce. That's not it.
37:04
Right. Those YouTubers still create hype, as
37:06
we realize as YouTubers ourselves. Everything has to
37:08
be like a dramatic headline. That's the way
37:10
it works. Joe, I think this was an
37:13
incredible run through of agents in a way
37:15
that will make it really easy for people
37:17
to actually understand what the reality is and
37:19
where they can go get started and obviously
37:21
we would highly recommend they go to a
37:24
platform like CrewAI and build some agents and
37:26
play with the technology. And you have a
37:28
bunch of like cool templates actually that make
37:30
it super easy to start to get inspiration.
37:33
One of the things that people actually struggle
37:35
with, which is why I wanted to really
37:37
dig into like how you would suggest someone
37:39
starts with a use case, is people just
37:41
struggle with like where do I even get
37:44
started? I figure the template gallery is a really
37:46
great way to find inspiration for your role.
37:48
So I appreciate you coming on and taking
37:50
us through that. No worries. Thank
37:55
you so much for
37:57
having me. I had
37:59
a blast. Thank you
38:01
so much. Catch
38:03
you next time. Thank you,
38:06
Joe. Appreciate your help.