Episode Transcript
0:00
Welcome to Heavy Networking the
0:02
flagship podcast from the Packet Pushers.
0:04
I am your co-host Ethan Banks,
0:06
along with Drew Conry-Murray. You
0:09
can follow us on LinkedIn, Blue
0:11
Sky, and the Packet Pushers Community
0:13
Slack Group, and please do. On
0:16
today's episode, Artificial Intelligence, with sponsor,
0:18
Selector.ai. Now if you're curious and
0:20
maybe you're still skeptical about the value
0:22
AI is bringing to network operations, this
0:25
is an episode you should listen to.
0:27
Selector is on the forefront of AI
0:29
ops for networking, building models that are
0:31
customized and specifically targeted at networks.
0:34
What Selector is doing, it's not
0:36
simply the low-hanging AI fruit of throwing
0:38
documentation into a model so you can
0:40
ask natural language questions and get a
0:42
procedure back. Selector is going hard into
0:45
the data your network generates, finding connections,
0:47
and surfacing information that makes you better
0:49
at your job. John Capobianco is
0:51
our guest today. John's been on heavy
0:54
networking before and I... I think of
0:56
John as a bit of a mad
0:58
scientist with network automation and lately AI
1:00
writing code building tools and thinking hard
1:03
about how to get useful information out
1:05
of the network in novel ways. And
1:07
John recently joined Selector because he saw
1:09
their approach to AI that of a
1:11
network language model as a thing he
1:13
wanted to be part of. So John,
1:15
welcome. Well, welcome back to heavy networking
1:18
and man, it's good to see your
1:20
smile and face again. And it's good.
1:22
Could you jump into this idea of
1:24
Selector's network language model? Because it's the
1:26
it's the distinguishing thing you guys told
1:28
us about in the prep call for
1:30
the show. So okay, what is it?
1:32
Is it like a large language model
1:34
or is it something else? Yeah, I
1:37
know that's great. And you've teed everything
1:39
up so perfectly Ethan and I'm I'm
1:41
so glad to be back on a return
1:43
visit to packet pushers on this real journey
1:45
from networks to
1:47
network automation to now AI. So
1:49
at the heart of it. there
1:51
is a large language model. You've
1:53
asked, is it a large language
1:56
model? Except it's been fine-tuned. Okay, so
1:58
think of this as mutating an existing
2:00
model. In our case, it's the
2:02
Llama 3 model, moving up to
2:05
Llama 3.1 and 3.2, keeping up with that
2:07
model's evolution. This is Meta's open
2:09
source model, and there are techniques
2:11
known as fine tuning that let
2:14
you, let's call it inject domain
2:16
specific knowledge into the large language
2:18
model. Now that's a key point
2:20
because we can take all that
2:23
telemetry, all that SNMP, all that
2:25
syslog, all of those alerts,
2:27
API JSON, unstructured data, structured data,
2:29
all those massive amounts of data.
2:32
AI is really, really good
2:34
at working with that large amount
2:36
of data from a machine learning
2:38
perspective to normalize it and to
2:41
correlate it and to graph it
2:43
and present it in a
2:45
way that we can interact with
2:48
it, standardized. But also we can
2:50
take that data and use it
2:52
as the source data to fine-tune
2:54
an existing large language model. Now
2:57
I call this a network language
2:59
model because it truly has been
3:01
trained on your network data and
3:03
the machine learning outcomes. And actually,
3:06
Selector has an inner layer of
3:08
fine-tuning, let's say, as in Selector,
3:10
the SELECT statement from SQL. That's
3:12
where the name of the organization
3:15
comes from. So we actually do
3:17
a layer of fine tuning that
3:19
changes SQL into natural language. So
3:21
a user can just ask, how
3:24
is the Wi-Fi in New York?
3:26
And that actually turns into a
3:28
fairly complex SQL query under the
3:30
hood that then goes against that
3:33
fine tuned network language model that's
3:35
specific to your organization. So what
3:37
I think is kind of neat
3:39
is that we can actually combine
3:42
fine tuning, and what Selector's doing
3:44
is an approach called RAFT, retrieval
3:46
augmented fine tuning. So you've heard
3:48
about RAG, retrieval augmented generation.
3:51
That technique in itself is incredible,
3:53
but you can compound that and
3:55
use the vectors and use the
3:57
true source of truth as a
4:00
way to provide data sets that
4:02
then can fine tune the Llama
4:04
3 model. Okay, so you said
4:06
a lot and I want to
4:09
try to parse some of this
4:11
out. So we're starting with a
4:13
large language model. You said llama,
4:15
I think 3.1, 3.2, and you'll
4:18
increment as that increments. And that's
4:20
to give this what you're doing,
4:22
sort of a foundation of English
4:24
language context to have that natural
4:27
language capability, yes? Exactly. So the
4:29
core root of this, think of
4:31
it as concentric rings, if you
4:34
will. So the core of it
4:36
is going to be the Llama
4:38
3, large language model, billions of
4:40
parameters trained on natural human English
4:43
and human text. Okay, and then
4:45
we're adding to that, so domain
4:47
specific information, which is network data.
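To make the "inject domain-specific knowledge" step concrete, here is a minimal sketch of LoRA-style fine-tuning of an open Llama model on network question/answer pairs, assuming the Hugging Face transformers and peft libraries. The model name (gated weights), hyperparameters, and Q&A pairs are illustrative assumptions, not Selector's actual pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Hypothetical Q&A pairs distilled from your own telemetry (the RAFT idea:
# retrieved network state becomes the fine-tuning dataset).
pairs = [
    ("How is the Wi-Fi in New York?",
     "NYC campus: 2 of 140 APs down, client retry rate normal, no SLA impact."),
    ("Why is BGP flapping on edge-rtr-02?",
     "Interface et-0/0/1 optic Rx power is low; 50 downstream events correlate to it."),
]

base = "meta-llama/Meta-Llama-3-8B"   # open base model (assumed; access is gated)
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: train a few million adapter parameters instead of the full model.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=2e-4)
model.train()
for question, answer in pairs:
    text = f"Q: {question}\nA: {answer}{tok.eos_token}"
    batch = tok(text, return_tensors="pt")
    # Causal-LM loss over the Q/A text teaches the adapters the domain answers.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```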
4:49
So you mentioned SNMP, you mentioned
4:52
APIs, you mentioned syslog, structured,
4:54
non-structured data, that's also being fed
4:56
into the model? That's correct. So
4:58
we can take all of that
5:01
data that we collect from the
5:03
network and from different points and
5:05
endpoints and even applications and use
5:07
it as a source so the
5:10
retrieval augmented generation, think of that as
5:12
storing all that data in a
5:14
vector store and then being able
5:16
to retrieve against that data we
5:19
can then use that to generate
5:21
the data sets to fine tune
5:23
the model. So we're going to
5:25
add an outer band around the
5:28
natural language model of domain-specific enterprise
5:30
network and system information. So in
5:32
its normal training, an LLM like
5:34
Llama is already going to know
5:37
quite a bit about networks, right?
5:39
Without any fine tuning, they know
5:41
incredible amounts, subnetting, BGP, OSPF, port
5:43
numbers, access control list, different vendors'
5:46
syntax even. And that's because just
5:48
the general model has scooped up
5:50
that stuff as it sort of
5:52
swims through the internet like a
5:55
whale ingesting krill? That's right. It
5:57
already has a certain level, but...
5:59
we want to fine-tune it with
6:01
the domain-specific information. I think a
6:04
good analogy is an open book
6:06
versus a closed book exam. An
6:08
LLM is going to train itself
6:11
like a brain on billions and
6:13
billions of internet text and other
6:15
sources, but it has a knowledge
6:17
cutoff date, and it only has,
6:20
it only knows what it's been
6:22
trained on. Well, with... with retrieval
6:24
augmented generation and with fine tuning,
6:26
you can sort of turn it
6:29
into an open book exam. So
6:31
when I ask how is Wi-Fi
6:33
in New York, how is an
6:35
LLM going to answer that without
6:38
fine, without fine tuning, without domain
6:40
specific information? It's going to hallucinate.
6:42
It might make a guess, it
6:44
might be right, it might be
6:47
wildly incoherent, but if we can
6:49
fine tune the model, with information
6:51
about Wi-Fi in New York, now
6:53
the LLM has an open book
6:56
external ability to reach into the
6:58
data to augment the generation of
7:00
the output, hence retrieval augmented generation,
7:02
right? So is there kind of,
7:05
since it has to do this
7:07
fine tuning meaning, it has to
7:09
go and find new information, is
7:11
there... a lag or delay in
7:14
the information I might get back
7:16
from this model when I ask
7:18
how is the Wi-Fi in New
7:20
York? Well, it can be, it's
7:23
continuously fine tuning itself and the
7:25
user gets to select the increments
7:27
of time. So yes, there could
7:29
be some lag. You know, we
7:32
recommend anywhere between a five and
7:34
a 15 minute training cycle. You
7:36
don't want to be train, you
7:38
know, you need it to finish
7:41
its completion before you start the
7:43
next training cycle. So there is
7:45
some consideration there, but it is
7:48
as close to real time as
7:50
the training allows it, right? Okay,
7:52
so but we're not talking like
7:54
a 24- or 48-hour delay. It's
7:57
within a reasonable time frame. Within
7:59
a reasonable window of I would
8:01
say five to 15 minutes.
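A rough sketch of that training cadence, where each incremental pass must run to completion before the next one starts, is below; the cycle length, the collect_new_telemetry stub, and the fine_tune_increment placeholder are hypothetical, not Selector's scheduler.

```python
import time

def collect_new_telemetry() -> list:
    """Stand-in for pulling whatever the collectors gathered since the last cycle."""
    return ["<189>edge-rtr-02 LINK-3-UPDOWN ...", "ifHCInOctets.5=918237123"]

def fine_tune_increment(records: list) -> None:
    """Placeholder for one incremental fine-tuning pass over fresh telemetry."""
    print(f"fine-tuning on {len(records)} new records ...")
    time.sleep(1)  # stand-in for the real training time

CYCLE_SECONDS = 10 * 60  # somewhere inside the recommended 5-15 minute band

for _ in range(3):  # a real deployment would loop continuously
    start = time.monotonic()
    fine_tune_increment(collect_new_telemetry())  # finishes before the next cycle
    # Sleep only for whatever is left of the cycle, so cycles never overlap.
    time.sleep(max(0.0, CYCLE_SECONDS - (time.monotonic() - start)))
```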
8:03
John, everything you're describing sounds like it's
8:06
here today and it's real. So
8:08
just to be perfectly clear, this
8:10
is all active technology. I can
8:12
buy this today and it works.
8:15
Yeah, that's correct. We have many
8:17
large ISPs. We have companies in
8:19
industry, manufacturing, excuse me, media, entertainment,
8:21
health care. It's it's actually starting
8:24
to take hold and the demand
8:26
is quite incredible. I was just
8:28
at ITC in Switzerland, and the
8:30
most difficult part of my role
8:33
is really convincing people that this
8:35
is an actual thing that exists
8:37
right now that you could put
8:39
into your network and start interfacing
8:42
with natural language. I would say
8:44
bi-directionally, meaning you can ask the
8:46
questions in natural language, but it's
8:48
also going to drive smart alerts
8:51
to you in natural language. Nothing
8:53
cryptic about this, no templates to
8:55
make. It can take that alert.
8:57
and correlate it to a root
9:00
cause and send you a single
9:02
alert instead of the 500 messages
9:04
of everything we used to get,
9:06
right? So that is really what
9:09
drew me to this was this
9:11
idea of a network language model
9:13
that has a natural language interface
9:15
and all of the dramatic and
9:18
drastic implications that that has. It's
9:20
democratic. I've seen people from a
9:22
CX level, CXO level, use it.
9:25
all the way down to CCIEs,
9:27
all the way to junior engineers.
9:29
It really is a sliding scale.
9:31
because that's what the natural language
9:34
models have been trained on. It
9:36
knows how to adjust its syntax,
9:38
so to speak, and it's how
9:40
it explains things, the level of
9:43
depth, the focus of the conversation.
9:45
It really is a democratizing
9:47
force. So easy, even your CTO
9:49
could use it. Anybody, right? Nice,
9:52
true. So John, where does
9:54
this slot into the constellation of networking
9:56
tools that we have out there?
9:58
Is this a troubleshooting tool? Is
10:01
it like a network management system?
10:03
You said a lot of things
10:05
that make me think it's kind
10:07
of all of those things. Yeah,
10:10
what I think is really exciting
10:12
is that we we're positioning ourselves
10:14
as an augmentation. And I feel
10:16
that there's a little bit of
10:19
concern about AI being a displacer
10:21
and going to, you know, eliminate
10:23
jobs. I have a much more
10:25
utopian outcome that AI is going
10:28
to augment our jobs and augment
10:30
us as human beings, just like
10:32
every other tool we've had since
10:34
the lever in the Stone Age,
10:37
right? So what I think is
10:39
neat is that we can augment
10:41
your SolarWinds, your Splunk, your
10:43
existing stack. We're not saying we're
10:46
here to, we're gonna save
10:48
you all these tools and we're
10:50
gonna replace all these things. Maybe
10:52
over time, there might be some
10:55
natural decay and natural opportunities to
10:57
sunset older technologies as people move
10:59
to our platform. But yes, it's,
11:02
it's monitoring, it's observability, but with
11:04
that augmentation of machine learning and
11:06
artificial intelligence. So I've even seen,
11:08
you know, a user wants to
11:11
know why their application feels slow,
11:13
right, which involves getting into things
11:15
like memory in the host, right?
11:17
Right. So, as opposed to troubleshooting
11:20
the network for hours, it actually
11:22
will surface that the host is
11:24
low on memory and to go
11:26
into this node and this stack
11:29
and up the memory, and your
11:31
latency issue will go away. It
11:33
has the insights that normally would
11:35
take, you know, this whole idea
11:38
of a war room or a
11:40
digital war rooms after the pandemic
11:42
where you're in your instant messenger
11:44
tool of choice with 15 other
11:47
subject matter experts trying to pinpoint
11:49
some issue. This has the ability,
11:51
this being machine learning and artificial
11:53
intelligence, to take siloed data from
11:56
across the stack, from the bottom
11:58
of the network, all the way
12:00
to the application layer, and draw
12:02
connections and make correlations, because of
12:05
its natural ability to cluster things.
12:07
So that's really what we do
12:09
as humans, right? As we try
12:11
to take in as much data
12:14
as we can and eliminate, sort
12:16
it into what's good and what's
12:18
bad, and try to focus more
12:20
on what's bad and throw away
12:23
what's good, to hopefully then make
12:25
some connections between the syslog
12:27
alert I got and the 200
12:29
calls the help desk is receiving,
12:32
right? Like that is what it
12:34
actually is extremely capable of doing
12:36
beyond any individual or even team
12:39
of people's capability in this day
12:41
and age, where there's six or
12:43
seven devices per user, each running
12:45
a hundred different apps. right in
12:48
different clouds in different data centers
12:50
hosted by different technologies right. I'll
12:52
give you an example that I
12:54
think I've given before on this
12:57
podcast but I distinctly remember a
12:59
problem we were facing as a
13:01
financial payments gateway where sporadically payments
13:03
would not process for a certain
13:06
high-profile customer, and it ended
13:08
up being kind of escalated escalated
13:10
escalated too it's like okay now
13:12
there's six or eight or ten
13:15
of us on a call banging
13:17
our heads together trying to figure
13:19
out what the heck is going
13:21
on because everything it's like it
13:24
must be the network it must
13:26
be the load balancer, it must be
13:28
the firewall, it must be this,
13:30
it must be that. Long story
13:33
short, it was maximum connections were
13:35
not set high enough on the
13:37
server. Occasionally when that customer would
13:39
hit us, because they'd hit us
13:42
with a lot of volume, we'd
13:44
max out the number of connections.
13:46
Anything exceeding that connection count would
13:48
get dropped. The server couldn't handle
13:51
them. It was a simple problem
13:53
to solve once we knew that,
13:55
oh, we just got to pick
13:57
up, add more connections. And off
14:00
we went. You've just nailed it
14:02
and that's that is I think
14:04
every one of us has a
14:06
story like what you're saying and
14:09
I think people listening to this
14:11
are going to connect this to
14:13
their. Oh my goodness it was
14:16
just this little thing but I
14:18
would argue that in our field
14:20
97, maybe 98%, of the
14:22
effort is in identifying the root
14:25
cause. It's not in fixing the
14:27
root cause. Like you said, what
14:29
did you have to do? We
14:31
upped the connections. That took a
14:34
second for someone to toggle the
14:36
nerd knob, right? Yeah. Or some
14:38
optic fails while I replace the
14:40
optic, or I need to call
14:43
my ISP for something, or I
14:45
need to roll back a configuration
14:47
that someone made. Like it's very
14:49
obvious what the fix is once
14:52
you've identified that root cause. And
14:54
I think that is really the
14:56
game changer of this idea of
14:58
a network language model that can
15:01
basically cluster things and then correlate
15:03
the root cause. Right. So if
15:05
an optic goes down, you're going
15:07
to get likely BGP errors, likely
15:10
OSPF errors or ISIS errors, likely
15:12
some interface errors, but you're going
15:14
to get hit with it all
15:16
at once. Right. The AI is
15:19
actually going to say here. This
15:21
optic failed and it actually caused
15:23
these 50 sub-events. You don't have
15:25
to worry about those. You need
15:28
to replace the optic. Well, that
15:30
just cut down potentially hours and
15:32
hours and millions and millions of
15:34
dollars if that's an internet, you
15:37
know, if that's a... service provider
15:39
facing link that you're troubleshooting on
15:41
end, right? So what I think
15:43
is neat is that it's actually
15:46
here for infrastructure around the same
15:48
time as it's arriving for application
15:50
developers, right? Things like Agile, things
15:53
like DevOps, they started in the
15:55
early 2000s. And we're still talking
15:57
about getting network automation to be
15:59
adopted, in NetDevOps and
16:02
infrastructure as code. It's taken a
16:04
long time for infrastructure to catch
16:06
up to that wave that made
16:08
software development so successful. Well now
16:11
we actually have a co-pilot for
16:13
your network, much like application developers,
16:15
have a co-pilot in their VS
16:17
code to help them write code.
16:20
Right. It's actually here for infrastructure
16:22
to troubleshoot, to verify configs,
16:24
to identify root cause, to remediate
16:26
root cause, some of the digital
16:29
twin predictive nature, that object is
16:31
going to overheat in 18 days
16:33
based on the ramp up we
16:35
see and the correlations that we
16:38
can make, right? So we talked
16:40
a little bit about the construction
16:42
of what Selector.ai is doing. There's
16:44
the Llama model, there's network data,
16:47
which is coming from the customer,
16:49
and we'll get into that in
16:51
a minute. You also mentioned... There's
16:53
a SQL component and I'm curious
16:56
if you can drill into that
16:58
a little bit more, why SQL
17:00
and what it's doing under the
17:02
hood. So that is where the
17:05
founders really thought early before when
17:07
they were still at Juniper, about this
17:09
idea of bringing network to SQL.
17:11
I myself did a whole bunch
17:14
of network as search with Elastic.
17:16
So I think it's a good
17:18
fit if you can normalize that
17:20
data. So with the advancements in
17:23
machine learning, we can actually take
17:25
heterogeneous data and get it actually
17:27
into SQL structure. But then the
17:30
problem becomes now we need to
17:32
train network engineers on SQL
17:34
queries and make them all DBAs.
17:36
That's a big, that's the challenge.
17:39
It's not necessarily building good SQL
17:41
databases with network state or config.
17:43
It's how you bring up network
17:45
engineers to now be database administrators,
17:48
right? Normalize all of the different
17:50
kinds of network data and put
17:52
it in a structure. SQL provided
17:54
that structure, but instead of forcing
17:57
folks to become SQL query experts,
17:59
you sort of hide the SQL
18:01
element under natural language. That's correct.
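As a toy illustration of hiding SQL under natural language, the sketch below hard-codes the translation step that a fine-tuned model would perform and runs the generated query against an in-memory SQLite table; the schema, data, and question mapping are invented for the example and are not Selector's implementation.

```python
import sqlite3

# Toy telemetry table standing in for the normalized data store (illustrative).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE wifi (site TEXT, ap TEXT, status TEXT, retry_pct REAL)")
db.executemany("INSERT INTO wifi VALUES (?, ?, ?, ?)", [
    ("New York", "ap-nyc-01", "up", 1.2),
    ("New York", "ap-nyc-02", "down", 0.0),
    ("Denver",   "ap-den-01", "up", 0.8),
])

def to_sql(question: str) -> str:
    # Stand-in for the fine-tuned model's natural-language-to-SQL step.
    if "wi-fi" in question.lower() and "new york" in question.lower():
        return ("SELECT status, COUNT(*) AS aps, AVG(retry_pct) AS avg_retry "
                "FROM wifi WHERE site = 'New York' GROUP BY status")
    raise ValueError("question not understood")

question = "How is the Wi-Fi in New York?"
for status, aps, avg_retry in db.execute(to_sql(question)):
    print(f"{aps} APs {status}, average retry {avg_retry:.1f}%")
```

The user only ever sees the question and the summarized answer; the SQL stays under the hood.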
18:03
So the first band of fine
18:06
tuning that we do to the
18:08
Llama model is the SQL
18:10
to natural language fine tuning. Okay.
18:12
To teach Llama all of the
18:15
different possible SQL queries that could
18:17
be used against our data store
18:19
and how to actually interface
18:21
them with natural language. So that
18:24
is the first set of fine
18:26
tuning on this kind of concentric
18:28
rings. and then the outer ring
18:30
is the network data that we
18:33
then fine tune on top of
18:35
it. So the network language model
18:37
has the ability to do natural
18:39
language to SQL queries, but it
18:42
also has the ability to provide
18:44
network state as a retrieval augmented
18:46
generation source. All right so that
18:48
outer layer, that conceptual outer
18:51
ring of the model you've described
18:53
as telemetry, network telemetry that's
18:55
being fed to the model. So
18:57
is that my telemetry alone as
19:00
a Selector customer, or
19:02
is that my telemetry and everybody
19:04
else's telemetry, to kind of
19:06
augment the model? That's a great
19:09
point. So every customer gets the
19:11
first two layers of the fine
19:13
tuning. So every customer is going
19:16
to get the Llama 3 core
19:18
with the SQL concentric ring trained.
19:20
That outer band is per customer,
19:22
not shared, not stored, we're not
19:25
then fine tuning a master version
19:27
of the language model or something.
19:29
Every individual customer has a unique
19:31
mutated version of their network language
19:34
model based on their telemetry data
19:36
and their telemetry data alone. We
19:38
have the ability to do this
19:40
in the customer cloud on-prem in
19:43
a cloud we have access to.
19:45
There are different deployment options based
19:47
on the use case and scenario
19:49
on scale. You know, it's a
19:52
few nodes and some collectors on,
19:54
you know, within the organization
19:56
to build and train and tune
19:58
the model, but that is not
20:01
cross-pollinated across enterprises or customers at
20:03
all. No. It's all fenced in
20:05
specific private and secure to the
20:07
customer. Right. I might not want
20:10
the model trained and then my
20:12
customer, my competitor, I'm
20:14
sorry, maybe uses that model trained
20:16
on my data. Maybe that doesn't
20:19
feel good. That's a thing. No,
20:21
it's a good thing to bring
20:23
up because this is a new
20:25
frontier and there are people have
20:28
a lot of questions and they're,
20:30
you know, the number one concern
20:32
is that they're going to be
20:34
using ChatGPT in some way.
20:37
and that ChatGPT is going to
20:39
be learning from my syslogs,
20:41
from my IP lists, from my
20:43
proprietary configurations, this is not the
20:46
case. This is the power of
20:48
open source and the power of
20:50
a distributed private model that can
20:53
be run on GPUs in the
20:55
cloud or on-prem, but a customer's
20:57
private version of it, right? And
20:59
then with that data, we as
21:02
selector are not privy to it.
21:04
It really is their network language
21:06
model. That was my next question
21:08
is if I've got data that
21:11
I've got to be very careful
21:13
with, I've got very specific data
21:15
governance that I'm required to adhere
21:17
to, my data doesn't have to
21:20
go to Selector, the company. You
21:22
give me the, again, the guts
21:24
of the model, if you will,
21:26
and then I bring that in-house
21:29
and make that model my own.
21:31
Selector never sees my data if
21:33
I don't want you to. That's
21:35
correct. And you can build up
21:38
your own on-prem. So if it
21:40
is something that needs to be
21:42
air-gapped, right? And you're thinking that
21:44
this won't apply to your enterprise
21:47
because of your constraints or your
21:49
security concerns or policy. Like I
21:51
said, it could be an on-prem
21:53
solution that's air-gapped on your own.
21:56
We provide the nodes and the
21:58
collectors and that model is going
22:00
to be fine-tuned for your access
22:02
and your access only. Although, I
22:05
would say I just freaked out
22:07
for a second because I was
22:09
like, wait a minute, if I
22:11
got to build this thing in
22:14
house, does that mean I need
22:16
to go to Nvidia and maybe
22:18
get another round of funding for
22:20
my company so I can afford
22:23
the GPUs to make this model
22:25
work? Well on-prem, like I said,
22:27
if you do want to host
22:30
it, you will have to build
22:32
the hardware solutions. And we do
22:34
have the paperwork and the SKUs
22:36
that show how many nodes per
22:39
scale and what type of hardware
22:41
you need. You know, the cloud
22:43
is a consideration, but I don't
22:45
have to fill a data center
22:48
with GPUs to pull this
22:50
off, do I? No, no, a
22:52
moderate footprint for GPUs, right?
22:54
No, it
22:57
would train faster, right, depending on
22:59
the scale of the GPUs
23:01
and the resources it had available
23:03
to it, but no, you
23:06
do not need a full, you
23:08
know, row of racks of
23:10
GPUs or a very complicated system,
23:12
right? Again, what you're going to
23:15
need is the footprint to do
23:17
inference, because the fine tuning portion
23:19
of it, we're not really doing
23:21
full blown training of a model.
23:24
Well, that's where the larger footprint
23:26
for GPUs on-prem is going to
23:28
happen when customers or enterprises want
23:30
to actually build a model from
23:33
their data. Because we're fine tuning
23:35
an existing model, the footprint is
23:37
that much, it's a balancing act,
23:39
right? So we can fine tune
23:42
and create this new network language
23:44
model from your telemetry data within
23:46
a reasonable amount of time with
23:48
a reasonable amount of hardware because
23:51
it's kind of a halfway point.
23:53
We're taking an X billion parameter
23:55
size model that's already been built
23:57
and we're fine tuning it with,
24:00
you know, comparatively smaller amounts of
24:02
data. Millions of parameters that we're
24:04
fine tuning with as opposed to
24:07
billions or even trillions that might
24:09
be for the core model, the
24:11
Llama model. Right, right. Yeah. So
24:13
John just to give people a
24:16
sense of how much of a
24:18
footprint might be required to do
24:20
that fine tuning training with their
24:22
telemetry, like, how many
24:25
cluster members do they need, how many
24:27
CPU cores, that kind of thing,
24:29
GPU cores? Yeah, so Selector typically
24:31
runs on a three or five
24:34
member Kubernetes cluster so it is
24:36
Kubernetes, that's why we can
24:38
run it basically anywhere. 32 cores,
24:40
256 gigs of RAM per node
24:43
if you've got a lot of
24:45
data, right. 16 cores, 64 gigs
24:47
of RAM on a single node could
24:49
be our proof of concept to
24:52
get started. So, you know, a
24:54
moderate footprint there, it depends on
24:56
the use case. I would work
24:58
backwards from those numbers. You know,
25:01
a nationwide backbone is gonna have
25:03
a lot more data than a
25:05
college campus, but I think that
25:07
you would agree that those are
25:10
pretty modest numbers. Yeah, they
25:12
really are actually and kind of
25:14
sets my you know what it
25:16
popped into my head when we
25:19
started talking about you can train
25:21
this in-house was Nvidia,
25:23
with their huge presentation they did
25:25
earlier this year with that massive
25:28
massive array of racks stacked full
25:30
with all the cables and everything
25:32
and what it sells for, it's like
25:34
if you have to ask what
25:37
it costs you can't afford it
25:39
I'm sorry you know that kind
25:41
of thing you also need a
25:44
small modular nuclear reactor if
25:46
you want to power right Exactly.
25:48
Right. Well, and there's a similar
25:50
video from Tesla where it's just
25:53
rack after rack after rack after
25:55
rack and the rows and rows
25:57
of huge amounts of cables and
25:59
compute, right? Unbelievable, the stuff that's
26:02
coming out. So you mentioned collectors as
26:04
well as part of the solution,
26:06
and I assume those are the
26:08
things gathering my network data. Where
26:10
are those collectors? What are they
26:12
collecting? Are they doing any pre-processing
26:14
before it goes into the training node?
26:16
How does it all work? Well, the
26:18
collectors, so we are, you know, there's
26:20
a small number of distributed collectors.
26:23
There's small nodes that will bring
26:25
in the telemetry and send it
26:27
into the selector platform. It's a
26:30
lightweight footprint. We're not talking
26:32
about agents across all of your
26:34
desktops or anything like that. It's
26:37
a few number of collectors
26:39
internally that will collect the SNMP,
26:41
the syslog, gNMI, other telemetry,
26:43
and normalize it with what
26:46
we call a data hypervisor. So
26:48
similar to a hypervisor that
26:50
kind of abstracts the hardware, we
26:52
are going to take that heterogeneous
26:55
data that we collect and
26:57
normalize it into JSON.
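Here is a minimal sketch of that kind of normalization step: two very different inputs, a raw syslog line and an SNMP-style varbind, mapped into one common JSON event shape. The field names and the parsing pattern are assumptions for illustration, not Selector's actual data-hypervisor schema.

```python
import json
import re
from datetime import datetime, timezone

def normalize_syslog(line: str) -> dict:
    """Parse a raw syslog line into a common JSON event shape (toy pattern)."""
    pri, host, tag, msg = re.match(r"<(\d+)>(\S+) (\S+) (.*)", line).groups()
    return {"source": "syslog", "device": host, "metric": tag, "value": msg,
            "severity": int(pri) % 8, "ts": datetime.now(timezone.utc).isoformat()}

def normalize_snmp(varbind: dict) -> dict:
    """Map an SNMP-style varbind into the same shape."""
    return {"source": "snmp", "device": varbind["device"], "metric": varbind["oid"],
            "value": varbind["value"], "severity": None,
            "ts": datetime.now(timezone.utc).isoformat()}

events = [
    normalize_syslog("<189>edge-rtr-02 LINK-3-UPDOWN Interface et-0/0/1, changed state to down"),
    normalize_snmp({"device": "edge-rtr-02", "oid": "ifHCInOctets.5", "value": 918237123}),
]
print(json.dumps(events, indent=2))
```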
26:59
And you mentioned SNMP, syslog,
27:01
can you do flows? Can you
27:03
do what? Can you get streaming
27:05
telemetry, say like I'm thinking Arista,
27:08
the way they can stream telemetry
27:10
out of the switch? Yes, we can
27:13
basically work with you to ingest, you
27:15
know, any type of data flow at
27:17
all. It could even be rest API
27:19
data. But yes, we pride
27:21
ourselves with a number of data
27:23
sources that we can ingest. Well,
27:25
John, let's take this conceptual tool
27:28
here. We've been talking kind of
27:30
how it works, what it does
27:32
in a broad sense. Boots on
27:34
the ground. Give me some use
27:36
cases. What can I do with this
27:38
thing? Sure. So I love the
27:40
natural language examples, right? So, you
27:42
know, how is the Wi-Fi in New
27:45
York? One thing that I really
27:47
find appealing about the selector
27:49
dashboard. So we really are going to
27:51
try to take care of a single pane
27:53
of glass. I know you've heard this before,
27:55
but truly we are a layer above all
27:58
of your other dashboards and all of the
28:00
other points of entry. So we
28:02
have an open canvas and we
28:04
can build dynamic dashboards. So when
28:06
I ask, you know, show me
28:09
the latency on all my WAN
28:11
connections. We will build a graph
28:13
inside of the window that then
28:15
you can populate and start to
28:18
build a dynamic dashboard that then
28:20
you can drill down and explore.
28:22
Conversely, it really is the smart
28:24
alerting. So you're going to integrate
28:27
this with your slack, your teams,
28:29
your zoom, your whatever platform of
28:31
choice. And we're going to drive
28:33
smart alerts that have all of
28:35
the correlations and identify the root
28:38
cause. There's hyperlinks from these messages.
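A minimal sketch of pushing such a correlated alert into a chat tool, assuming a Slack-style incoming webhook; the webhook URL and alert fields are placeholders, not Selector's integration.

```python
import json
import urllib.request

def send_smart_alert(webhook_url: str, root_cause: str, impact: str, suppressed: int) -> None:
    """Post a correlated, human-readable alert to a chat tool via an incoming webhook."""
    body = {"text": (f":rotating_light: Root cause: {root_cause}\n"
                     f"Impact: {impact}\n"
                     f"{suppressed} correlated sub-events suppressed")}
    req = urllib.request.Request(webhook_url,
                                 data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# Fill in a real incoming-webhook URL before calling; this one is a placeholder.
send_smart_alert("https://hooks.slack.com/services/T000/B000/XXXX",
                 root_cause="edge-rtr-02 et-0/0/1 optic Rx power low",
                 impact="Boston <-> New York backbone degraded",
                 suppressed=50)
```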
28:40
So it's a bidirectional flow. You
28:42
can simply query, but it really
28:44
is going to do the correlation
28:47
and identify root cause. Now what's
28:49
interesting is we're actually going to
28:51
visualize to prove the alert, so
28:53
to speak, in case you want
28:56
to drill down and understand the
28:58
event in its significance and in
29:00
its totality. So we do have
29:02
the ability to show you these
29:05
network graphs. So it's a graph
29:07
flow using the retrieval augmented generation
29:09
that will have the root cause
29:11
at the root of the tree,
29:13
and then all of the subsequent
29:16
linked events after that main event
29:18
happened. So it's going to put
29:20
it into chronological order for you,
29:22
what the root cause was, the subsequent
29:25
affected services or protocols and visualize
29:27
that for you. So, okay, so
29:29
the classic network alert is up
29:31
down red, green kind of right?
29:33
So it sounds like if I
29:36
get that sort of an alert
29:38
from selector, it's going to be
29:40
this is up or this is
29:42
down with a bunch of metadata
29:45
around it that explains to me,
29:47
what the heck just happened here?
29:49
Well, exactly. And maybe I should
29:51
have mentioned this a lot earlier,
29:53
but not only are we going
29:56
to be collecting from collectors, we
29:58
are actually going to actually connect
30:00
into your CMDB, your NetBox,
30:02
your Nautobot, your Excel spreadsheet, your
30:04
metadata. So that way it would
30:07
actually link those IP addresses or
30:09
those MAC addresses or whatever BGP
30:11
identifiers to geo locations. So we
30:13
actually do visualization on over top
30:15
of maps over top of the
30:18
planet to show you the connections
30:20
between New York and Denver or
30:22
whatever. And in terms of the
30:24
red green, the machine learning is
30:27
actually very good at spotting trends.
30:29
So We actually have a lot
30:31
less false positives because of the
30:33
machine learning aspect of this. We
30:35
know that there's a spike on
30:38
Monday morning on traffic. We know
30:40
there's a Friday night backup, right?
30:42
The seasonality of the network is
30:44
part of this fine tuning now
30:46
and part of this machine learning.
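A small sketch of seasonality-aware baselining, using the Monday-morning-spike example from the conversation: readings are compared only against history for the same weekday and hour, so routine peaks don't page anyone. The synthetic history and z-score threshold are illustrative, not the actual machine-learning method.

```python
import random
import statistics
from collections import defaultdict

random.seed(1)

# Toy history: Monday 09:00 routinely runs hot, so it should not alert.
history = defaultdict(list)
for _week in range(8):
    for weekday in range(7):
        for hour in range(24):
            base = 40.0 if (weekday == 0 and hour == 9) else 10.0
            history[(weekday, hour)].append(base + random.gauss(0, 1))

def is_anomalous(weekday: int, hour: int, gbps: float, threshold: float = 3.0) -> bool:
    """Flag a reading only if it deviates from the baseline for this exact
    weekday/hour slot, so seasonal spikes and Friday-night backups stay quiet."""
    baseline = history[(weekday, hour)]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1e-9
    return abs(gbps - mean) / stdev > threshold

print(is_anomalous(0, 9, 41.0))   # Monday 09:00 spike: expected, no alert -> False
print(is_anomalous(2, 3, 41.0))   # same reading on Wednesday 03:00 -> True
```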
30:49
So the alerts that you get
30:51
are not typically noise. Here's the
30:53
alert. By the way, because of
30:55
this optic failure you've lost Boston,
30:57
New York, Denver, whatever, right? So
31:00
real metadata, but in a natural
31:02
language that you can continue the
31:04
conversation with. I don't want that
31:06
point to be lost: it's not
31:09
just a one-way smart alert. I
31:11
can then interrogate that alert further:
31:13
what time did this happen? Are
31:15
there any configuration changes near this
31:17
event? I can further the conversation
31:20
with this copilot. It really
31:22
is a network co-pilot that I
31:24
can include. And then from there,
31:26
because it's all in slack, I
31:28
can invite other people into that
31:31
room. We have the AI co-pilot
31:33
as part of the actual conversation
31:35
interrogating and trying to come up
31:37
with a solution. But like we
31:40
said, 97% of the effort usually
31:42
is in identifying the root cause.
31:44
Well, now I've actually been given
31:46
a root cause that is actionable.
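To show the shape of that correlation step, here is a toy sketch that groups a burst of alerts behind the earliest event in a time window and reports it as the probable root cause. Real clustering is far more involved; the events, devices, and window are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    ts: datetime
    device: str
    message: str

# Hypothetical burst of alerts following a single optic failure.
t0 = datetime(2024, 11, 1, 3, 12, 0)
events = [
    Event(t0, "edge-rtr-02", "OPTIC_RX_LOW et-0/0/1"),
    Event(t0 + timedelta(seconds=2), "edge-rtr-02", "LINK_DOWN et-0/0/1"),
    Event(t0 + timedelta(seconds=3), "edge-rtr-02", "BGP_NEIGHBOR_DOWN 203.0.113.1"),
    Event(t0 + timedelta(seconds=4), "core-rtr-01", "OSPF_ADJACENCY_LOST et-0/0/7"),
]

def correlate(events, window=timedelta(seconds=30)):
    """Group events that land within `window` of the first one and treat the
    earliest event as the probable root cause (a toy stand-in for ML clustering)."""
    ordered = sorted(events, key=lambda e: e.ts)
    root, rest = ordered[0], ordered[1:]
    children = [e for e in rest if e.ts - root.ts <= window]
    return root, children

root, children = correlate(events)
print(f"Root cause: {root.device} {root.message}")
print(f"Suppressed {len(children)} correlated sub-events")
```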
31:48
I can just take action. So
31:51
that's what I want to maybe
31:53
stress to the network engineers of
31:55
the world. I've used dozens of
31:57
platforms in my career, right? And
31:59
the promise was always, hopefully the
32:02
alert I get. is something I
32:04
can do something with and take
32:06
action with. Otherwise you end up
32:08
muting it, making mailbox rules, right?
32:10
We just filter out all this
32:13
noise. But I mean something I
32:15
can do something with, right? I've
32:17
built, I've built these systems to
32:19
help with RCA. And usually the
32:22
methodology is you're building a dependency
32:24
tree. That is you are educating
32:26
the platform to understand because you've
32:28
got them in a stack and
32:30
you basically you build out the
32:33
network graph yourself in some some
32:35
way. And so you're teaching the
32:37
thing. It doesn't have any intelligence,
32:39
but it just kind of knows
32:41
the way you've laid things out
32:44
either hierarchically or in a network
32:46
graph layout sort of a way
32:48
that if this goes down, these
32:50
other things are going to be
32:52
affected in some way. But something
32:55
I want to make sure I
32:57
understand here with Selector, the way
32:59
you guys learn the model, I
33:01
don't have to do that sort
33:04
of education or training of the
33:06
model. It's going to figure those
33:08
relationships out itself. Yes, exactly. That's
33:10
why we can do those proof
33:12
of concepts with one node and
33:15
a single use case that a
33:17
customer wants to solve very quickly
33:19
because we're not spending, okay, add
33:21
another rule, add another rule, think
33:23
of another rule, right? And then
33:26
to your point of building it
33:28
manually, it only holds up
33:30
until there's another use case. Oh,
33:32
I didn't think of this situation
33:34
happening, right? So now I have
33:37
to update that whole graph with
33:39
another scenario and another scenario. This
33:41
is exactly what the machine learning
33:43
algorithms are good at is doing
33:46
all of this for us, feed
33:48
it the data, let it, you
33:50
know, we are, we are, it
33:52
is supervised learning, right? This is
33:54
not unsupervised learning, we at Selector
33:57
have our own recipes involved here
33:59
in that machine learning algorithms and
34:01
in the fine tuning, right? So
34:03
it is supervised in a way,
34:05
but you're right, to your point,
34:08
no, the first couple days of
34:10
our POC are not whiteboarding all
34:12
the scenarios by hand and then
34:14
mapping those to some set of
34:16
rules you need to apply to
34:19
a system, right? So when I
34:21
get this up and running, is
34:23
it just kind of a blank
34:25
window and it's waiting for me
34:28
to query it to have it
34:30
do something, or is it going
34:32
to start, you know, as
34:36
it gets information and starts figuring
34:39
out my network and how it's
34:41
running, start feeding information up to
34:43
me? I think it's a bit
34:45
of both. It does start as
34:47
a blank window and basically what
34:50
you kind of use a building
34:52
blocks approach. But then correct that
34:54
you're going to need a few
34:56
minutes to ingest from your collectors,
34:58
give it five to 10 minutes
35:01
to have enough data, and then,
35:03
and then it's going to start
35:05
normalizing and doing baselining and log
35:07
mining of the CIS logs that
35:10
it's going to start receiving and
35:12
of the telemetry it starts receiving.
35:14
No, and then you're sort of
35:16
off to the races. You have
35:18
the copilot learning from your data,
35:21
spotting the anomalies, identifying root cause
35:23
and, you know, you want to
35:25
connect your metadata, your meta stores
35:27
to this, so your NetBox
35:29
or whatever, so that way you
35:32
have that enriched data. But I,
35:34
but I also want to put
35:36
data in that is higher up
35:38
in the stack. That is, I
35:40
don't want to operate this purely
35:43
as a network silo. And I'm
35:45
thinking about mean-time-to-innocence sorts
35:47
of queries that we get in.
35:49
Developers been challenged by the customers
35:52
because of poor performance and they
35:54
go, we didn't do anything, it
35:56
must be the network, you know,
35:58
and they fling it over the
36:00
wall. So to get to MTI,
36:03
you need to have data from
36:05
higher up in the network or.
36:07
server level kind of information and
36:09
stuff coming in from NGINX
36:11
or Apache logs and from your
36:14
load balancer and maybe from your
36:16
application and even from your database
36:18
and be able to pull all
36:20
of that data in so that
36:22
you could so that selector could
36:25
then say what root cause is.
36:27
In other words, the model's only
36:29
going to be as good as
36:31
what's in that outer ring,
36:34
right, the amount of telemetry I'm feeding
36:36
it, right? That's absolutely correct. The
36:38
more sources of data, the better
36:40
in our case. And like you
36:42
said, we do want that NGINX
36:45
flow, that F5 flow, the
36:47
database flow, the compute flow. We
36:49
want all of it because it
36:51
builds, right, a more total picture.
36:53
in that outer ring for inference
36:56
and for making these network graphs,
36:58
right? So it's not going to
37:00
pollute, right? You suddenly think, well,
37:02
if I'm going to be starting
37:04
to monitor API data, Jason data
37:07
from some API, right, and you
37:09
don't, you might not think it
37:11
has any connection at all to
37:13
your network flows, those natural correlations
37:16
that the AI can make. It
37:18
actually is enhancing the data. So
37:20
the more data that we have,
37:22
the better it is for us.
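A small sketch of why pulling in host and application data helps: with time-aligned samples from different silos, even a simple correlation points at the memory-starved host rather than the WAN, which is the mean-time-to-innocence case described earlier. The numbers are fabricated for illustration; this is not how Selector actually computes correlations.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical time-aligned samples scraped from three different silos:
# application p95 latency (ms), free memory on the host (MB), WAN utilisation (%).
app_latency_ms   = [120, 135, 180, 420, 510, 490, 150, 130]
host_free_mem_mb = [900, 850, 600, 180, 120, 140, 820, 880]
wan_util_pct     = [34, 31, 32, 33, 30, 33, 32, 31]

# A strong negative correlation with free memory, and essentially none with WAN
# utilisation, points the finger at the host, not the network.
print("latency vs free memory:", round(statistics.correlation(app_latency_ms, host_free_mem_mb), 2))
print("latency vs WAN util:   ", round(statistics.correlation(app_latency_ms, wan_util_pct), 2))
```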
37:24
Okay, so one more connection to
37:27
make here, that is to the
37:29
world of distributed tracing in massive
37:31
distributed compute models that are running
37:33
across Kubernetes, distributed tracing comes up
37:35
to help developers figure out what's
37:38
going on across the stack. Where
37:40
did this call go? What server
37:42
did it hit? What process answered
37:44
it? What were the... etc. really
37:46
complicated problem and a bunch of
37:49
different companies in startup space have
37:51
tackled this, the distributed tracing problem.
37:53
It feels to me like John
37:55
that the model given enough telemetry
37:58
and time to learn the distributed
38:00
tracing problem, I don't know if
38:02
it goes away because it's not
38:04
exactly solving the same problem set
38:06
here, but very similar where I'm
38:09
going to get to root cause
38:11
analysis. based on information I'm pulling
38:13
in from a whole bunch of
38:15
different systems, and I don't have
38:17
to explain to the model how
38:20
to correlate that data. It's gonna
38:22
figure it out. Yeah, no, I...
38:24
I think you nailed it. I
38:26
don't know. I hate to say
38:28
solve problems, but like you said,
38:31
for distributed tracing, there's still quite
38:33
a bit of human effort involved,
38:35
right? Like that's what we're trying
38:37
to reduce is the human effort
38:40
that's required and use the augmentation
38:42
of artificial intelligence and machine learning
38:44
to handle large, large amounts of
38:46
data, right? To reduce that mean time
38:48
to innocence from hours to minutes,
38:51
or even seconds in
38:53
some cases, right? So I've been
38:55
getting briefings from vendors in the
38:57
network monitoring observability space for decades
38:59
and they all say pretty much
39:02
the same thing and have been
39:04
for decades. Like we collect your
39:06
data, we make a baseline, we
39:08
look for anomalies, we send you
39:10
an alert. What is different here?
39:13
Well, I think it's... living up
39:15
to the promise of 15 or
39:17
20 years of that story, right?
39:19
I think it is actually embracing
39:22
artificial intelligence and machine learning. Those
39:24
two are something that people haven't
39:26
done in the past, simply
39:28
because it wasn't mature enough;
39:30
there was no real artificial
39:33
intelligence, really. It's only practically about
39:35
two years old, since, you know,
39:37
ChatGPT launched in November of
39:39
2022, November 30th. So what I
39:41
think is different is that the
39:44
amount of data that we can
39:46
actually absorb with artificial intelligence,
39:48
with a modern network language model
39:50
is the equivalent of, right, it's
39:53
a force multiplier. I don't know
39:55
if it's 25 more people or
39:57
50 more people or like five
39:59
super experts on your team. It
40:01
really is like having a new
40:04
member of your network operations team
40:06
or application or database or whatever
40:08
telemetry you're pointing at it, you
40:10
have this new team member that
40:12
doesn't get tired, doesn't really make
40:15
mistakes, doesn't need vacations, speaks to
40:17
you in a language you can
40:19
understand. I don't need to learn
40:21
any specific new query language. I
40:23
don't have to learn the database
40:26
of the day's flavor, right? There's
40:28
so much for network engineers to
40:30
have to do, even just in
40:32
the sphere of networking. And then
40:35
if they start to do automation,
40:37
now they can start to use
40:39
artificial intelligence. I think it's just
40:41
a natural progression from DevOps to
40:43
NetDevOps to AIOps. But
40:46
yes, it is the same story
40:48
that we're going to collect all
40:50
your data, baseline your data, find
40:52
anomalies and give you smart alerts.
40:54
I don't think the story is
40:57
changed. I think the foundation of
40:59
the technology that the story is
41:01
riding on top of has
41:03
dramatically changed. I think it's a
41:05
we are doing revolutionary things in
41:08
comparison to some of the, let's
41:10
call them legacy at this point,
41:12
operational tools. I guess I was
41:14
trying to find out if there
41:17
was a way you could elucidate
41:19
sort of what machine learning is
41:21
doing differently than sort of the,
41:23
I guess, traditional method of, you
41:25
know, we build the baseline and
41:28
then we alert you about a
41:30
deviation. Right. Well, I think, I
41:32
mean, sometimes problems, I think of
41:34
how many underlying problems there are
41:36
on networks today that are going
41:39
unnoticed. simply because of a lack
41:41
of ability to absorb that much
41:43
data, right? One single router or
41:45
one single switch generates a lot
41:47
of telemetry and a lot of
41:50
data. And I don't know that
41:52
some graph on another pane of
41:54
glass is really helping solve the
41:56
problem. What we are approaching things
41:59
differently in that you can literally
42:01
just have a conversation with the
42:03
network technology in the form of
42:05
human language. I think that
42:07
in itself is like the addition
42:10
of the mouse, right? It's a
42:12
new peripheral. It's a whole new
42:14
way to interface with technology as
42:16
a whole, just to be able
42:18
to say, you know, is there
42:21
a problem on my WAN link between
42:23
New York and Chicago? And just
42:25
let the machine literally take it
42:27
from there. There's a predictive nature
42:29
to what's going on here. That
42:32
is, you know, anticipating the answer
42:34
to a question in a language
42:36
model context. And so... I think
42:38
going back to Drew's question, what's
42:41
different is, is the math involved
42:43
is different. The math that's being
42:45
done to generate the model and
42:47
the data that comes out of
42:49
that model is distinct from, from
42:52
say, statistical analysis. And it's also
42:54
the way the answer is generated
42:56
is distinct from endless if then
42:58
else's stacked as logic and code.
43:00
Would you agree with that, John?
43:03
Am I on the right track?
43:05
Oh, I think you're totally correct.
43:07
The advancements in in. Even just
43:09
the GPU, the power of these
43:11
processors that we have access to,
43:14
it has really opened up an
43:16
entire new realm of the possible,
43:18
right? To predictive nature, digital twins,
43:20
being able to replay network events,
43:23
hopefully someday being able to ask
43:25
it, what would happen if I
43:27
added this route? Right, going beyond
43:29
just a predictive nature of some
43:31
optics can overheat, being able to
43:34
ask questions about impact with proposed
43:36
changes, which were on the path
43:38
towards. I think that the ability
43:40
for us to apply ourselves to
43:42
what matters and what really counts
43:45
is important. I think the more
43:47
of the tedium, the more of
43:49
the repeatable things, the more of
43:51
having to memorize commands and the
43:53
CLI, like I just think that
43:56
we can do better and we
43:58
are very specialized human beings in
44:00
this networking world. understand these systems
44:02
better than anyone and we've largely
44:05
built them ourselves. It's fun now
44:07
to have someone that we can
44:09
speak to that understands our language
44:11
and understands the language of your
44:13
network that you've built as you've
44:16
built it, right? Like I don't
44:18
want to dismiss this idea that,
44:20
you know, this fine tuning of
44:22
the network model actually creates a
44:24
co-pilot based on your exact topology,
44:27
protocols, IP addresses, geo locations, metadata.
44:29
And it's astounding to just chat
44:31
with something you've built. Okay, you
44:33
mentioned use cases, you mentioned something
44:35
along the lines of use cases
44:38
being kind of open. So is
44:40
that is that the right way
44:42
to think about the selector AI
44:44
model? Is that is it a
44:47
general purpose model or is it
44:49
highly highly trained just for a
44:51
specific use case I come up
44:53
with it in my network. No,
44:55
I think it is, it's somewhere
44:58
in between. I like we're not
45:00
going to let's say rewrite the
45:02
way we tune the model based
45:04
on a use case necessarily. The
45:06
model stands up on its own
45:09
for general purpose use cases. We
45:11
do like to start with the
45:13
use case. So when we interface
45:15
with new customers and when we're
45:17
trying to get to know the
45:20
world. We are really a use
45:22
case driven problem solving approach. So
45:24
let's start with a single use
45:26
case with a proof of concept.
45:29
What is your pain point? What
45:31
are you struggling with? What is
45:33
it that you'd like to solve
45:35
with our platform? And then we
45:37
can work with you to implement
45:40
the solution to hopefully solve that
45:42
single problem. But we're confident in
45:44
that approach because there are going
45:46
to be more problems, right? Everyone
45:48
has more than just one single
45:51
problem they're trying to solve. But
45:53
if we can help them solve
45:55
that first single problem, I think
45:57
they will see the natural opportunity
45:59
to hopefully solve many problems. And
46:02
again, it's not just limited to
46:04
some interfaces gone down and you're
46:06
going to get an alert that
46:08
says the interface has gone down,
46:11
right? It's actually going to be
46:13
able to take all of the
46:15
data that it has and all
46:17
of the connections that it made
46:19
and give you the impact assessment
46:22
of that outage, right? It's going
46:24
to be able to make your
46:26
tickets in your ticketing system for
46:28
you. Right. Things that you might
46:30
not think of that are time
46:33
wasters or a tedium, right? Waking
46:35
up to a smart alert and
46:37
the tickets already been created and
46:39
possibly we've already implemented a closed
46:41
loop solution by running an Ansible
46:44
playbook or by running some terraform
46:46
code or a Python script. Like
46:48
the possibilities there don't stop with
46:50
just the smart alert. We could
46:53
actually kick off closed loop automation
46:55
to implement remediation steps that
46:57
the natural network language model will
46:59
help guide you towards, right? Okay,
47:01
so you anticipated my question. We've
47:04
been talking about it can alert
47:06
you to a problem, but it
47:08
sounds like maybe Selector AI can't
47:10
fix it itself, but if I've
47:12
got automations or processes that can
47:15
be kicked off automatically, I can
47:17
work with Selector to do that
47:19
to kick those. Yeah, exactly, exactly.
47:21
We want to try to close
47:24
the loop. And if you have,
47:26
and it could even, so it
47:28
will suggest solutions to problems, right?
47:30
The root cause has been identified
47:32
and based on our analysis, here's
47:35
our suggested fix or here's the
47:37
configuration change that someone made that
47:39
introduced this problem. I think those
47:41
are some low-hanging fruit to roll
47:43
back a configuration change with approval
47:46
automatically, right? If I hit a
47:48
threshold and the number of incidents
47:50
that are created from a configuration
47:52
change, roll it back or run
47:54
some... customer-written or collaborative Ansible
47:57
or pyATS or whatever it
47:59
is, right? But yes. It doesn't
48:01
stop at the smart alert. We
48:03
can do close loop automation. And
48:06
again, the smart alert is an
48:08
open-ended conversation that doesn't end with
48:10
just the alert. You can start
48:12
to interrogate this, the specific situation.
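A minimal sketch of that approval-gated closed loop: the suggested fix is surfaced, and only once it is approved does the script shell out to ansible-playbook to run the remediation. The playbook name, threshold logic, and approval flag are hypothetical placeholders, not Selector's workflow.

```python
import subprocess

def remediate(root_cause: str, playbook: str, approve: bool) -> None:
    """Closed-loop sketch: only run the remediation playbook once a human
    (or a policy threshold) has approved the suggested fix."""
    print(f"Suggested fix for '{root_cause}': run {playbook}")
    if not approve:
        print("Awaiting approval; no change made.")
        return
    # ansible-playbook is the standard CLI entry point for running a playbook.
    subprocess.run(["ansible-playbook", playbook], check=True)

remediate(
    root_cause="config change on edge-rtr-02 spiked the incident count",
    playbook="rollback_edge_rtr_02.yml",   # hypothetical playbook name
    approve=False,
)
```

The same pattern would fit a Python script or Terraform run in place of the playbook.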
48:14
That's really interesting. We've talked about
48:17
ingesting data into the model. You're
48:19
talking, John, about how it's pretty
48:21
easy for you guys to pull
48:23
in lots of different sorts of
48:25
data from a lot of different
48:28
sort of sources. Okay. However, even
48:30
with that capability, do I as
48:32
a customer need to have a
48:34
data scientist on staff to make
48:36
the most of this platform? So
48:39
that's great. I'm, I sort of
48:41
teased some of our staff that
48:43
we are data scientists as a
48:45
service, right? So that really is
48:48
part of the engagement with Selector,
48:50
is that, right, it's not a
48:52
professional services angle. We're not going
48:54
to upsell you on professional services.
48:56
We're going to get data scientists
48:59
to connect with you very early
49:01
in the process to help with
49:03
that fine tuning aspect and understand
49:05
the data that's coming in, particularly
49:07
if it's data that is unique
49:10
to the customer or a system
49:12
that only they have that format
49:14
of data in. We will provide
49:16
the data scientists as a service
49:18
more or less throughout the engagement
49:21
and throughout the totality of the,
49:23
you know, of the solution. So
49:25
that's not something that you need
49:27
to have. If you are an
49:30
IT operational shop and you want
49:32
to augment your staff with AI
49:34
and are interested in the solution,
49:36
it's not like you need to
49:38
have any special skills or special
49:41
AI knowledge or machine learning experience
49:43
on staff. No, we'll take care
49:45
of all of that. Well,
49:48
John, I think I'm getting why
49:50
you're enthusiastic about this. I mean,
49:52
I remember before you ever went
49:54
into working in vendor land, you
49:56
were experimenting with all kinds of
49:58
network automation stuff and just making
50:00
code do things that were interesting
50:03
and you get open sourced a
50:05
bunch of tools and stuff and
50:07
then got into AI on your
50:09
own. You know, and now you're
50:11
at Selector. Your journey's been fun.
50:13
I totally get, now that you've
50:15
told this story about Selector, why
50:17
you're there, it makes sense to
50:20
me. And I, from what you
50:22
guys told us, you've got a
50:24
bunch of customers that are super
50:26
enthusiastic as well. I guess series
50:28
B funding has been or is
50:30
going to be announced very soon.
50:32
Yes, there's a big announcement about
50:34
series B funding. We have a
50:37
tremendous amount of interest, a tremendous
50:39
amount of success, I would say,
50:41
with our existing customers. It's an
50:43
exciting time to get in at
50:45
the ground floor and then see
50:47
the growth, you know, probably 25 or 30
50:49
new employees since I've joined. And
50:51
it's because of demand. And I
50:54
think it really speaks to the
50:56
fact that we're actually solving problems.
50:58
with artificial intelligence. We're actually helping
51:00
organizations as best we can, given
51:02
the fact that it is new,
51:04
and it is a bit of
51:06
a trailblazing approach, right? It's difficult
51:09
to be first in many ways.
51:11
And I think Selector has been
51:13
first in many different ways and
51:15
many different technologies. So the Series
51:17
B is only going to further
51:19
fuel. our growth and our capabilities
51:21
and the things we're going to
51:23
offer in the platform. Another thing
51:26
you mentioned along the way was
51:28
was packet co-pilot that there's a
51:30
co-pilot style feature here as part
51:32
of a selector AI and you've
51:34
got some news around packet co-pilot
51:36
as well. Yeah, so we
51:38
really wanted to make a community
51:40
tool a tool that uses artificial
51:43
intelligence to help network engineers in
51:45
a way for free. So on
51:47
our selector.ai website in the next,
51:49
hopefully by the time that this
51:51
launches, there's going to be a
51:53
packet co-pilot that lets you upload
51:55
p-caps that you would normally, you
51:57
know, open with Wireshark and you
52:00
can chat with them with natural
52:02
language. So as an example, if
52:04
you had a, you know, four-frame
52:06
p-cap of a DHCP conversation, you
52:08
could upload it and say, can
52:10
you help me understand this packet
52:12
capture? And it will break it
52:14
down that this is a DHCP,
52:17
you know, request and acknowledgement and
52:19
here's the source IP and destination
52:21
IP. Now that's just one example,
52:23
but I want people to start
52:25
really thinking about what would you
52:27
ask of your packets if you
52:29
could, right? So we're hoping this
52:32
encourages everyone on the spectrum from
52:34
people just entering the field who
52:36
maybe are just starting to work
52:38
with packet captures to the most
52:40
experienced and seasoned veterans to get
52:42
real expertise-type questions with packet
52:44
captures. So you'll just be able
52:46
to upload your PCAP and ask
52:49
it questions.
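As a sketch of the kind of preprocessing that could sit behind chatting with a capture, the snippet below uses scapy to reduce a DHCP pcap to a structured summary that could be handed to a language model alongside the user's question. The filename is hypothetical, and this is not Packet Copilot's actual implementation.

```python
from collections import Counter
from scapy.all import rdpcap
from scapy.layers.dhcp import DHCP

MSG_TYPES = {1: "DISCOVER", 2: "OFFER", 3: "REQUEST", 5: "ACK"}

packets = rdpcap("dhcp_capture.pcap")   # hypothetical four-frame capture
summary = Counter()
for pkt in packets:
    if DHCP in pkt:
        # DHCP options are (name, value) tuples; count each message type seen.
        for opt in pkt[DHCP].options:
            if isinstance(opt, tuple) and opt[0] == "message-type":
                summary[MSG_TYPES.get(opt[1], opt[1])] += 1

# This structured summary is the sort of context you would pass to a model
# with the question "can you help me understand this packet capture?"
print(dict(summary))
```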
52:51
If memory serves me, John, you had a tool like
52:53
this of your very own. Is
52:55
this an iteration of that or
52:57
just basically an expansion of the
52:59
idea about fresh code? Yeah, so
53:01
it's an expansion of the idea
53:03
with a new take. I'm still
53:06
offering the packet buddy as an
53:08
open source tool that people can
53:10
download, right? So it is still
53:12
on my GitHub. And it
53:14
was very successful and actually drew
53:16
Kannan, the CEO of Selector's, attention
53:18
to me. He had seen the
53:20
packet stuff and his vision of
53:23
right from before I even joined
53:25
was that I was going to
53:27
join and be an evangelist and
53:29
that we were going to host
53:31
this tool for the community because
53:33
of its power and its ease
53:35
of use and it's just an
53:38
exciting, you know, use of artificial
53:40
intelligence to be actually be able
53:42
to do retrieval augmented generation with
53:44
packet captures. I think it's a
53:46
very interesting tool. Another new Packet
53:48
Pushers podcast is N Is For Networking,
53:50
which is aimed at people who
53:52
are just getting started, right? College
53:55
students. So, people brand new to
53:57
the field, they're trying to get
53:59
their heads around the fundamentals, Ethernet
54:01
and IP and just the basics
54:03
to be able to upload a
54:05
PCAP and begin asking plain language
54:07
questions about it could really help
54:09
some people out in that situation
54:12
as well. So, I think it's
54:14
a great tool. Packet Copilot again,
54:16
this is coming out on November
54:18
15th, 2024 that should be available.
54:20
Packet Copilot should be available from
54:22
selector.ai by the time you
54:24
read this. Well, John, thank you
54:26
for being on heavy networking And
54:29
for those of you listening out
54:31
there, thanks to you for listening. If
54:33
you want to find out more from
54:35
Selector, go to selector.ai/packetpushers,
54:37
and all the information that you're looking
54:40
for should be there. It's actually not
54:42
there right this second is where we're
54:44
recording this, but by the time you
54:46
hear this, you should be able to
54:49
go to selector.ai/packetpushers
54:51
and find out all kinds of
54:53
additional information. Thank you to selector for
54:55
sponsoring today's episode. And by the way,
54:57
if you do go hit that landing
55:00
page, if you do go hit that
55:02
landing page and find out more information,
55:04
would you let them know that you
55:07
heard about them on packet pushers? We
55:09
would really appreciate it. Want a hat? Go to
55:11
store.packetpushers.net. We've got merch
55:13
with just Packet Pushers branding on it
55:16
as well as branding from your favorite
55:18
podcast, like this one, Heavy Networking, all
55:20
from our network of 12 different podcasts
55:22
for your professional career development. We've kept
55:25
the prices pretty low. Yeah, we make
55:27
a few bucks on the merch, but
55:29
we've kept the prices pretty low. We ship
55:31
to wherever, you know. Wherever you are,
55:34
you can visit packetpushers.net/follow-up
55:42
and send us a message and you
55:44
don't have to leave an email if
55:46
you don't want to. Last but not
55:48
least, remember that too much networking
55:50
would never be enough.