Episode Transcript
0:00
Hello everyone , welcome to
0:02
episode number five of the Learn
0:04
System Design podcast . A
0:06
few of you did reach out about the sound
0:09
mix over the last couple of episodes , so
0:11
I want to let you know that episodes three and four
0:13
have been re-uploaded with a better mix
0:15
. Special thanks to Pallavi
0:17
and Diego , who both reached out
0:19
and mentioned that the music I was playing is
0:22
honestly not just a little loud but also
0:24
distracting . So for
0:26
all future episodes , the only music will
0:28
be the intro and the outro music , none
0:31
during the conversation portions . Thank
0:33
you again to Pallavi , Diego and everyone
0:36
else who reached out for this feedback
0:38
. Please feel free to reach out with
0:40
any ways that I could
0:42
improve this podcast so that you
0:44
have a better listening experience . Diego
0:47
in particular actually dropped his
0:49
podcast for learning a new language
0:52
, and so I'll also be including that in
0:54
the show notes below if you want to check that out
0:57
. But in today's episode we're going
0:59
to be tackling all things message brokers
1:01
and message queues . I'll be covering
1:03
a few topics this episode that sort of correlate
1:06
with that , but it will all add up to explain
1:08
how a message broker works , when to
1:10
implement it and how to use it .
1:12
To do that , I want to first
1:14
touch on microservices and
1:17
message queues .
Briefly
1:52
, I want to explain an important concept
1:55
that needs to be clear before we
1:57
talk about one of the biggest use cases
1:59
of using a message queue or
2:01
even a message broker .
2:03
To be clear , a message queue is a queue
2:05
that lives inside a broker and
2:11
you can have multiple message queues inside of a broker . But we'll get into all that . For now , the concept
2:13
I want to talk about is a monolithic application
2:16
versus a microservice application
2:18
. These are both architectures and
2:21
there's a lot of other architectures that we'll dive into
2:24
in a future episode
2:26
, things like event-driven architecture , domain-driven
2:29
, MVC , all of these sorts
2:31
of things but for this episode , I just
2:33
want to talk about monoliths and
2:35
microservice architecture and their
2:37
paradigms today and
2:39
how they're used . Make
2:41
no mistake , most applications
2:44
start off as a monolith
2:46
, and what I mean when
2:48
I say a monolith is just that all
2:51
of the code for your application
2:53
is in one place , usually some
2:55
sort of repository , but
2:58
you know it exists all in
3:00
a single node . If
3:02
you think about like a simple web application
3:04
with a front end and a back end , that
3:07
code for the front end , whether it's
3:09
in React or Vue and that code
3:12
for the back end , whether it's in Django
3:14
or Node or Nest
3:16
or whatever makes you happy . All
3:18
of these live in a single place
3:21
, all the codes right beside each other
3:23
within the same repository
3:25
. And this is great because
3:27
if you need to
3:29
change something , if you need to see , you know
3:31
what the front end is expecting
3:33
on the back end and what
3:35
the request needs to look like . It's very simple :
3:38
you can see all of the code in a single
3:40
place . It's great for quickly
3:43
developing things and , especially if you have
3:45
a small team , being able to see what
3:47
all the code's doing in your entire application
3:50
without feeling siloed
3:52
and what I mean by siloed is blocked
3:55
off from what other teams are doing and what
3:57
other code looks like . Having to
3:59
rely on someone telling
4:01
you what a request looks like , or asking them ,
4:03
rather than seeing for yourself what the request
4:06
will look like . And so that's
4:08
great . Until it comes
4:11
time to actually scale at a reasonable
4:13
rate . If your backend
4:15
needs more resources , there's
4:17
no direct way to give only
4:19
your backend more memory . Instead
4:22
, you have to scale vertically , give
4:24
your entire application more memory and
4:26
hope that a different part of the code doesn't
4:28
steal it . You're giving your
4:31
application more memory and
4:33
hoping the backend gets
4:35
as much as it needs to . The
4:37
second biggest hurdle you're going to hit
4:39
is code quality , because
4:41
all of your developers are working out of
4:43
a single place , a single repo
4:46
. Once monoliths get too large
4:48
, it is nearly impossible to make
4:50
a change and not have it affect
4:53
another piece of code in your
4:55
repository in an unknown way
4:57
. So why then do
5:00
most applications start off as a monolith
5:02
? The answer is simplicity
5:04
, right , they are much easier to
5:06
work on with smaller teams , as
5:09
I've mentioned , but they're also so much easier
5:11
to deploy . You're not having to deploy
5:13
you know 10 microservices
5:15
or n number of microservices
5:18
just to get everything to work . You
5:20
deploy one thing that you know works
5:22
because you've run it on a
5:25
single node before . The
5:27
bottom line is , when a monolith
5:30
is small , despite the obvious oxymoron
5:32
here , it is much easier
5:34
to work with for your dev team
5:37
at every level and
5:39
, much like with the world of fashion
5:42
or culture , software architecture
5:44
is ever-changing . It's not
5:46
immune from trends , and one
5:49
of the hottest trends of the last 10 years
5:51
, large in part thanks to companies
5:53
like Netflix , Google and Meta , has
5:56
been what's called a microservice
5:58
architecture . A
6:01
microservice architecture is simply
6:03
a paradigm that takes the idea
6:05
of a small monolith and applies
6:08
it to the entire application , using
6:10
sort of like a chunk style approach
6:13
. So let's think about that
6:15
small web application that I talked about
6:17
before . You have your front-end code
6:20
and now , in a microservice architecture
6:22
, that's its own service right that can be deployed
6:24
on its own . Then you have a separate
6:26
service for your backend , which , again , is
6:28
deployed on its own . But , most importantly
6:33
, these applications have their own resources and can be scaled independently
6:35
, horizontally , as
6:38
quickly as needed . So
6:40
I'll restate that . So if
6:42
you have the need for more
6:45
back-end services , you can scale
6:47
that horizontally , have duplicates
6:49
that handle it without
6:51
having to worry about whether
6:54
or not the front-end needs more resources
6:56
, or you can scale them both at the same time
6:58
. That also works . It gives you more
7:00
independence on controlling
7:02
the scaling . But
7:05
one of the biggest drawbacks
7:07
of microservices is
7:10
getting them to work together as
7:12
expected . It's a bit more
7:14
complex than , say , a monolithic
7:16
system because , again , a monolith everything
7:19
is there , it can communicate all
7:21
in the same application
7:23
, whereas a microservice
7:25
, you have to either communicate
7:28
via a web request
7:31
or on the local network , or through
7:33
a message broker , which we will talk
7:35
about in just a bit . For
7:43
now , I want to rewind for a minute and talk about what a message queue actually is . A queue which
7:45
, if you need a refresher , is just a data structure
7:48
in which the flow of the
7:50
elements follows a first in
7:52
, first out paradigm
7:54
FIFO for short . If
7:57
you imagine yourself at a bank , the
7:59
first person in line usually is the first
8:01
to be taken care of and then the first
8:03
out the door . The same applies
8:05
for a queue in software engineering . A
8:08
message queue is the same
8:11
, but the elements in this case are
8:13
messages , and these messages
8:15
get processed by a
8:18
service . The queue takes
8:20
a message from one service , in this case
8:22
a producer , and then it's
8:24
piped into a single consumer
8:26
, which of course is another service of some
8:28
kind , and then it's removed
8:31
from the queue . So imagine
8:34
for a moment uploading
8:36
a picture to your
8:38
Instagram or your Facebook or
8:40
any social media site . You
8:42
don't want to have to wait for the
8:45
picture to be uploaded to
8:47
the object storage and then written
8:49
to the database and then get a response saying
8:51
okay , your picture is all good , right , like that
8:53
takes a long time for the consumer and if that happened
8:55
, people would be like , this
8:57
service is extremely slow
8:59
, right , and that's how it would work
9:01
with a normal request structure
9:04
, whereas something with like a message
9:06
queue , all you have to do is , once
9:08
it's on that queue , return a 200
9:10
to the user . They
9:12
just understand okay , my picture's
9:14
been uploaded , I don't need to worry about what else is
9:16
happening . And then on the back end
9:18
, you take it , you put it in the object storage , you put it in
9:20
the database , etc . And
9:22
then , once one of their friends logs
9:25
online , you just pull it from the database , fetch
9:27
it from the object storage and show it . And
9:30
so that's the difference , right , is , the
9:32
user doesn't have to wait for all of this
9:34
processing to happen . It needs to happen
9:36
asynchronously from the user , because
9:38
, genuinely , the user
9:40
doesn't need to know how long it
9:42
takes to save the picture somewhere
9:45
or anything like that . What they want
9:47
to know is that their picture is safe and that we're doing
9:49
something to it . As far
9:51
as popular message queues , they come
9:53
in all shapes and sizes . There's
9:56
things like IBM MQ , which I think
9:58
might be the most popular one for like hardware
10:00
solutions and
10:03
for this I kind of want to just focus on
10:05
like external cloud-based solutions
10:07
, and so a few of those : Amazon
10:10
, of course , has their SQS system
10:12
and that's a proprietary solution
10:14
offered by Amazon . SQS
10:18
is a great solution if you need something hands-off
10:20
. It scales well , provided
10:23
cost isn't really a concern for you ,
10:25
because it can get costly sometimes
10:27
. On the other side of that coin
10:29
, if you need something more open source
10:31
, there's a solution called Celery
10:33
. Celery offers the ability to actually bring
10:35
your own messaging broker and
10:38
then Celery's message queue just
10:41
lives inside of that broker . Message
10:43
brokers can be thought of as
10:46
an application that sits
10:48
between any two services
10:50
that are trying to communicate with
10:53
each other . It takes a message
10:55
sent by the producer or the sender
10:57
or any service , and transforms
11:00
that message into something the consumer
11:02
or the receiver basically
11:05
another service can actually use . Basically
11:08
, it changes the protocol
11:10
of the message so that it can be consumed
11:12
properly . Let's
11:15
take a pause for a second and actually
11:17
talk about what a protocol is and
11:19
what sort of lives in a message broker
11:22
sort of the things I'm about to dive into
11:24
and it can get a little messy
11:26
and a little complicated whenever I throw
11:28
a bunch of technical jargon that's all nested
11:30
within each other . So instead
11:32
I'm just going to sort of lay it out and then we
11:34
can dive into it a little more
11:36
. In effect , message
11:39
brokers consist of two types
11:41
of things : an exchange and
11:43
a message queue . And
11:47
you can have multiples of exchanges or multiples of message queues , but
11:50
you'll always have at least one . In
11:52
a general sense , we've already
11:54
touched on what a message queue is
11:56
and how it fits into a message broker
11:58
. And on the other end of that , usually
12:01
what comes first in the order of a message broker
12:03
is what's called the exchange . And the exchange
12:09
is basically some code that handles
12:11
a message in a certain way and
12:17
how it handles it is called a protocol . So just to sort of clarify a message
12:19
comes into a broker . That broker handles the message in a certain way
12:21
, aka a protocol . That
12:24
exchange will tell the message
12:26
where to go , so which queue to go to , and
12:28
then , as it goes into the
12:31
queue , whenever the consumer of that
12:33
message is ready , it will take
12:35
it off of that queue from the message broker
12:37
, and we've effectively moved through the whole thing
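That end-to-end trip can be sketched in a few lines of Python . This is a toy model for illustration only , with made-up names like `Broker` , not any real broker's API :

```python
from collections import deque

class Broker:
    """Toy message broker: one exchange routing into named queues."""

    def __init__(self):
        self.queues = {}  # queue name -> FIFO deque of messages

    def declare_queue(self, name):
        self.queues.setdefault(name, deque())

    def publish(self, routing_key, message):
        # The "exchange" step: decide which queue the message goes to.
        # Here we route on an exact match with a queue name; a message
        # with no matching queue is simply dropped.
        if routing_key in self.queues:
            self.queues[routing_key].append(message)

    def consume(self, queue_name):
        # The consumer takes the oldest message off the queue (FIFO).
        q = self.queues[queue_name]
        return q.popleft() if q else None

broker = Broker()
broker.declare_queue("images")
broker.publish("images", "cat.png")   # producer side
broker.publish("images", "dog.png")
print(broker.consume("images"))       # first in, first out -> cat.png
print(broker.consume("images"))       # -> dog.png
```

A real broker adds persistence , acknowledgements and network protocols on top , but the exchange-to-queue-to-consumer shape is the same .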
12:39
. The first and
12:41
easiest to remember protocol
12:44
is called STOMP
12:47
, so STOMP stands for Simple
12:49
Text Oriented Messaging
12:52
Protocol . Its appeal
12:54
is the fact that it's very simple to use
12:56
, especially for scripting languages . It
12:59
works very similarly to HTTP
13:01
requests and can even be used
13:04
via TCP and WebSockets , and
13:06
so what this means for you is that you can use
13:09
it in a browser for real-time
13:11
chat clients . Another benefit
13:13
of Stomp is that it is the only
13:15
one of these three protocols that I'm going to talk
13:18
about that is
13:20
actually human-readable instead
13:22
of binary . If you're familiar
13:24
with JSON , if you've ever opened up a JSON
13:26
file to read all the key value
13:28
pairs , it's the same thing . In
13:32
the fact that it's human readable , not that STOMP
13:35
is JSON-like . But
13:38
because it's human readable
13:41
, you actually sacrifice on the
13:43
size of a message and the time spent
13:45
sending that message . That's
13:47
why you know STOMP is great for
13:50
things like web clients where
13:52
, like , you stay connected to the server , but
13:54
something where you might have a disconnect it's not
13:56
as great . Genuinely
13:58
, because you risk
14:01
having a larger message
14:03
that can be broken apart and
14:06
thus it fails . The
14:08
next protocol I want to talk about is known as
14:10
MQTT . It's
14:13
most widely used for embedded
14:15
or low-powered devices where network
14:17
issues are possible . MQTT
14:20
stands for Message Queue Telemetry
14:23
Transport . It focuses
14:25
on a single pattern of publisher
14:27
and subscriber . One of
14:29
the biggest benefits of MQTT is
14:32
the ability to give it a quality
14:34
of service level , so a QoS
14:36
level can be marked
14:38
as 0 , 1 , or 2 . QoS 2
14:42
gives it the highest level of a guarantee
14:44
that a message will be received while
14:47
also using the most amount of network
14:49
bandwidth , whereas something with
14:51
a QoS of zero uses
14:53
the least amount of network bandwidth
14:56
and therefore has the least likelihood
14:58
the message will actually be delivered
15:00
successfully . The other benefits
15:02
of MQTT are retained
15:05
and LWT messages
15:07
. Retained messages are pretty
15:09
much exactly what they sound like . It's
15:12
a message that is retained in a broker
15:14
that is sent to any consumer
15:16
that comes online after
15:18
being down . You can think
15:20
of retained messages as the
15:22
last important message since you've been
15:24
gone . The LWT
15:26
or Last Will and Testament message
15:28
is a little bit different , but
15:31
also extremely powerful
15:33
. While the message itself
15:35
is the same as any other MQTT
15:38
message , it is set by a
15:40
consumer instead of the producer
15:42
and only sent out in
15:44
the event of that consumer disconnecting
15:47
in a non-graceful way . So something like a
15:49
network outage or a
15:51
500 error . To
15:53
compare this to a real life situation
15:55
and explain the name a little bit , imagine
15:59
you and everyone you love are
16:01
consumers , and in this scenario
16:03
you only communicate through letters
16:05
sent through your lawyer . I
16:07
know it's a strange concept , but just follow me
16:09
here . Each of you gives a
16:11
last will and testament message to the lawyer
16:14
and he is not to give it out until
16:17
you pass away . When any
16:19
of you dies , the other
16:21
consumers get this LWT
16:23
letter that you have written . It's
16:26
the same concept with the MQTT protocol
16:28
One of your consumers disconnects
16:30
in some undesirable way and
16:33
every other consumer gets
16:36
your LWT message . This
16:38
helps if
16:40
a service needs to know that another service
16:42
is down . The
16:45
last protocol I want to talk about is
16:47
called AMQP , or
16:49
Advanced Message Queuing
16:51
Protocol . AMQP
16:53
, much like MQTT , uses
16:56
a publisher and subscriber architecture
16:58
where some application
17:00
publishes a message and
17:02
one or more applications can subscribe
17:05
to that message . The
17:07
message broker in this case is what applies
17:09
this protocol when handling the messages
17:11
. AMQP , which is
17:13
mostly used in like banking systems
17:16
, uses exchanges , routing
17:18
keys , bindings and queues to
17:20
help facilitate this whole process . When
17:23
a producer sends a message to the message broker ,
17:25
the message goes through one of these types of exchanges that we talked about earlier . These
17:33
exchanges forward the message to a specific queue and finally out
17:35
to any consumer that is subscribed
17:37
to the topic of that queue
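That producer , exchange , queue , consumer chain can also be sketched with plain dictionaries . Again , `bind` and `publish` here are hypothetical names for this sketch , not a real client library :

```python
from collections import defaultdict, deque

# Toy AMQP-style flow: an exchange uses bindings to forward a message
# into queues, and each consumer subscribed to a queue drains it there.
bindings = defaultdict(list)   # binding key -> list of queue names
queues = defaultdict(deque)    # queue name -> FIFO of messages

def bind(queue_name, binding_key):
    bindings[binding_key].append(queue_name)

def publish(routing_key, message):
    # Direct-style exchange: forward to every queue bound with this key.
    for queue_name in bindings.get(routing_key, []):
        queues[queue_name].append(message)

bind("feed-service", "post.created")
bind("notification-service", "post.created")
publish("post.created", {"user": "ben", "meme": "cat.png"})

# Both subscribed consumers now see their own copy of the message.
print(queues["feed-service"].popleft())
print(queues["notification-service"].popleft())
```

Note that two queues bound with the same key each get their own copy , which is what lets more than one consumer react to the same event .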
17:39
. So I briefly talked
17:41
about exchanges earlier . Let's talk for a minute
17:43
about the different types of exchanges
17:45
that can exist in a broker and
17:48
, honestly , why they're important . The
17:50
four key exchange types are
17:52
called direct , topic , fan
17:55
out and header . Before I dive
17:57
into these exchange types , I need to define
18:00
a few things that are going
18:02
to make it easier for you to follow here
18:04
. Let's start with a
18:06
binding , sometimes called a route . A
18:09
binding in a message broker is just a
18:11
link that matches a specific queue
18:13
to a specific exchange . This
18:16
link is established using a binding
18:18
key , which is just a specific
18:20
way of identifying that
18:22
binding . It might be helpful to picture
18:25
a binding as a road and
18:27
a binding key as the name of
18:29
the road . Routing
18:31
keys , which are sometimes confused with
18:33
binding keys but are not the same thing ,
18:35
are a
18:37
message property , depending on the
18:39
type of exchange . It is used sometimes
18:42
to verify which queue a message
18:44
should go to . If we keep the analogy before that we
18:48
had , where a binding is a road , the binding
18:50
key is the name of the road . We
18:52
can consider the message a package
18:55
that needs to be delivered to a certain place
18:57
and that routing key is
18:59
the address on the package
19:01
itself . So
19:08
now , with our newfound knowledge of how brokers handle queues and map the messages
19:10
from exchanges , let's dive into the actual types
19:12
of exchanges and how they work . First
19:15
up is a direct exchange . They're
19:18
the simplest type and
19:20
it simply looks at a routing key
19:22
on a message , compares that
19:24
to the binding key on a queue and
19:27
sends it to the queue only if it's
19:29
an exact match . In
19:31
the event where there are no binding keys
19:33
that match , the message is simply
19:35
discarded and it doesn't leave the message
19:38
broker . Next up is
19:40
called a topic exchange . These
19:42
are a little more powerful and better for use
19:44
cases where you have messages that
19:46
should go to more than one queue
19:48
and eventually to more than
19:50
one consumer . Right , topic
19:53
exchanges introduce a sort of
19:55
new concept we haven't heard yet . It's
19:58
called a binding pattern . Binding
20:01
patterns are the secret sauce to
20:04
allowing messages to be sent to multiple
20:06
queues . It forces all
20:08
routing keys to be specifically
20:11
separated by single dots , a.b.c
20:15
instead of a space b space
20:17
c , and instead
20:19
of having a specific binding
20:21
key that needs to match , you can actually
20:23
use an asterisk
20:26
or pound sign in your binding key to help
20:28
catch potential
20:30
candidates for a queue . The
20:32
two special characters that enable this are
20:35
an asterisk and a pound sign . The
20:38
asterisk or wildcard is
20:40
used to say there will be a single
20:42
word here and I don't care what
20:44
that word is . If we use the
20:47
example from before , a.b.c
20:49
, we have a binding pattern that
20:52
is *.b.c
20:54
. We
20:56
can have any word at the front , regardless
20:59
of length , as long as it ends
21:01
with .b.c , it
21:03
is valid . However
21:06
, if anything follows that
21:09
c , it is invalid . So
21:12
, for example , we can have ben.b.c
21:15
. That's valid . cat.b.c
21:18
. That's valid , but
21:20
ben.b.c.anything
21:24
is invalid , because
21:26
it needs to be specifically word.b.c
21:30
, because that asterisk only replaces the
21:32
word . The pound
21:34
sign , sometimes called a hashtag
21:36
, covers this scenario where the
21:38
asterisk doesn't . Its
21:41
rules are basically any number of words
21:43
or letters can go here , from zero
21:45
words to an infinite
21:47
amount of words . Imagine again
21:50
this scenario of a.b.c . If
21:52
we add a pound
21:54
sign , making the pattern a.# ,
21:57
then a.b is valid , a.b.c is valid , a.b.c.d is valid ,
22:05
and everything all the way out to an infinite number of letters or words . However
22:07
, it always has to begin with
22:13
a , always
22:15
. Next
22:17
, let's talk about fan out exchanges
22:19
. They are perfect if you want
22:21
your message to go to a lot of queues without
22:24
a lot of overhead on patterns
22:26
and keys . With fan out
22:28
exchanges , every time the exchange
22:30
gets a message it will send it to every
22:33
single queue that is bound
22:35
, regardless of keys . In
22:38
fact , if you specify a key on
22:40
these , you're just wasting your time because
22:42
it doesn't even look at it . Fanout
22:45
is a great strategy when you have consumers
22:47
that might care about any message
22:49
that comes through and can handle any
22:51
message in their own way . And
22:54
, to top it all off , there is no limit
22:57
to the amount of queues that you can bind
22:59
to a fanout exchange . A
23:01
real world example here might be some sort
23:03
of breaking news alert . It doesn't
23:05
matter what the news is , doesn't matter
23:07
if it's sports or an
23:09
emergency , every news organization
23:12
is going to want to know about it and display
23:14
it in their own way , and that's when
23:16
fan out exchanges are really helpful . The
23:19
final exchange I want to talk about is
23:22
called the header exchange . If
23:24
you're not familiar with headers in
23:26
HTTP requests , they're
23:29
a simple way of giving information to
23:31
the server describing the encoding
23:34
or the authorization or whatever you want
23:36
. It describes something about
23:38
the request you're sending . You
23:40
can think of them as just little helpful bits of information describing
23:42
what you're sending . Headers
23:47
in an exchange work the same way
23:49
. They are specialized keys that
23:51
are used to specify which
23:54
queue the message should be routed to . Header
23:57
exchanges are very similar to topic
23:59
exchanges , but instead of using
24:01
binding keys , we just use headers
24:03
on a message . I
24:07
do want to make a special like sort of comment here and say that all
24:09
message brokers use exchanges and queues
24:12
. It's specifically the protocols
24:14
within them that determine
24:16
how they're used . So while these
24:18
exchanges were mostly covered
24:20
under the AMQP strategy
24:23
, any protocol can use
24:25
these exchanges effectively
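For the header exchange specifically , the matching rule can be sketched like this . I'm modeling it on RabbitMQ's headers exchange , where a binding carries a table of headers plus an `x-match` argument of `all` or `any` ; treat the function as an illustration , not a real API :

```python
def headers_match(binding_headers, message_headers, x_match="all"):
    """Sketch of headers-exchange routing (modeled on RabbitMQ's
    x-match binding argument: 'all' means every header pair must
    match; 'any' means at least one must)."""
    checks = (message_headers.get(key) == value
              for key, value in binding_headers.items())
    return all(checks) if x_match == "all" else any(checks)

binding = {"format": "png", "type": "meme"}
print(headers_match(binding, {"format": "png", "type": "meme"}))         # True
print(headers_match(binding, {"format": "jpg", "type": "meme"}))         # False under 'all'
print(headers_match(binding, {"format": "jpg", "type": "meme"}, "any"))  # True under 'any'
```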
24:27
. In our discussion
24:29
about header exchanges , I
24:31
briefly talked about how HTTP
24:34
requests and message
24:38
brokers both use headers and how they're
24:40
sort of similar . I
24:42
think it would be useful to take some time to talk
24:44
about when to use a message broker
24:47
instead of HTTP
24:49
requests between your microservices
24:52
to help them communicate . When
24:54
you are breaking out a monolithic system
24:56
into a microservice architecture
24:58
, one of the first and easiest
25:01
things to do is have your services
25:03
communicate through HTTP requests
25:06
. Some teams opt
25:08
into having requests go through a normal
25:10
everyday process ( you're
25:12
just firing it off across the web ) ,
25:14
whereas some teams try to
25:16
utilize more of a local network
25:18
. They both have their pros and cons , where
25:21
working on a local network is great because
25:24
it means all of your nodes are together and
25:26
your requests are really fast , but
25:29
it also means all of your nodes have to be
25:31
on the same region of the same cloud and
25:33
share that same local network
25:35
. However
25:37
, eventually , as with most things
25:39
, the solution of just using
25:41
http requests across
25:43
your system starts to prove very
25:46
restrictive . It makes it so
25:48
your services have to be tightly
25:50
coupled with each other and makes them
25:52
very hard to separate . It
25:55
also makes it very hard to handle
25:57
that fault tolerance
25:59
that we talked about in the earlier episode , which
26:02
, in case you don't remember , fault tolerance just
26:04
means if one piece of your system
26:06
goes down , then the rest of your
26:08
application shouldn't be rendered unusable
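Here is that difference in miniature as a Python sketch . The service names are hypothetical , and a `queue.Queue` stands in for the broker :

```python
import queue

notification_jobs = queue.Queue()  # stands in for the message broker

def notify_down():
    # Simulates the notification service being unreachable.
    raise TimeoutError("notification service is down")

def post_meme_sync(meme):
    # HTTP-style: the request blocks on the notification service,
    # so an outage there becomes the user's problem.
    notify_down()
    return 200

def post_meme_async(meme):
    # Broker-style: enqueue the job and ack the user immediately;
    # the notification service drains the queue once it's back up.
    notification_jobs.put(meme)
    return 200

try:
    post_meme_sync("cat.png")
except TimeoutError as err:
    print("sync path failed:", err)

print("async path:", post_meme_async("cat.png"))  # 200 right away
print("job waiting:", notification_jobs.get())    # processed later
```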
26:11
. Imagine a scenario
26:13
where you're working on a simple social media
26:15
app . Now imagine you have
26:17
a few different services , but your specific
26:20
service for sending a
26:22
push notification goes down . One
26:24
of your users tries to post
26:27
a meme and everything is great until
26:29
it sends a request to your notification service
26:31
and it waits for that response saying , hey
26:33
, we sent that out . Eventually
26:36
, that call is going to time out and
26:38
your user is going to have a bad experience
26:40
. Now let's imagine
26:42
that exact same system with a simple
26:44
PubSub message broker . Your
26:46
user posts their hilarious meme , the
26:49
message goes to the message broker . The
26:51
broker sends back a 200 saying
26:53
great , everything's good on our
26:55
end , and then processes
26:58
that message into the exchange and
27:01
it sends it through all the proper
27:03
queues that it needs to . All
27:05
the consumers bound to these queues are alive
27:08
, except for the notification
27:10
service . Right , what happens
27:12
in this scenario ? And feel free to pause
27:14
and sort of challenge yourself and guess
28:17
before I give the answer .
28:19
Ready ? Okay
27:21
, so remember when I said the message
27:23
goes into the broker . Well
27:26
, like I said , the user at that point gets
27:28
a success because our backend
27:30
has acknowledged that we have that message . So
27:33
then the services are all still alive , except
27:35
for the one . They take the message
27:37
and then process
27:40
it as they need to . Maybe one service
27:42
adds it to the user's feed , one
27:44
adds it to the S3 bucket , but the notification
27:47
service in particular , once it
27:49
comes back online and depending , of
27:51
course , on our protocol , either pulls
27:54
the message from the queue for all
27:56
new messages or grabs the latest
27:58
one that it needs to process
28:00
and sends out that notification
28:02
correctly . A
28:05
much less lengthy way of thinking
28:07
about this is if you're using a
28:09
microservice architecture . When building
28:12
a system or improving a system , are
28:14
there parts of that system
28:17
that should be asynchronous ? I.e.
28:20
, if a service goes down , will your
28:22
entire application hang up
28:24
waiting for the response
28:27
? If the answer is ever yes , then
28:29
it's probably a good idea to set
28:31
up a message broker and save yourself
28:33
a lot of heartbreak in the future . And
28:36
with that we've taken a huge step
28:38
into understanding the enigmas
28:41
of microservice architectures
28:43
and message brokers , more
28:45
specifically , how message brokers
28:48
work , why they are really useful
28:50
and , when it comes to scaling
28:52
and fault tolerance , why
28:54
they're one of the best tools for the job . Next
28:57
episode we're going to be jumping into what
28:59
I believe is one of the most important parts of building
29:01
out a microservice architecture , and
29:04
that's load balancing . So if you
29:06
haven't already , go ahead , subscribe
29:08
to the podcast , because I'm
29:10
very excited to talk about them and talk about
29:12
the algorithms that go along with them . I
29:15
really enjoyed everyone's feedback on the last episode
29:17
. I hope these newer episodes improve in
29:20
sound quality and are a lot easier
29:22
and less distracting to hear
29:24
. If you'd like to suggest any
29:26
new topics or even be a guest on the podcast
29:28
, feel free to drop me an email at LearnSystemDesignPod
29:32
at gmail.com . Remember to include
29:34
your name if you'd like a shout out
29:36
. If you would like to support the podcast , help
29:38
me pay my bills , help me pay off my student loans
29:40
or even just help me hire
29:43
someone to do more research , jump
29:46
over to our Patreon . Consider becoming a member
29:48
. All podcasts are inspired
29:50
by Crystal Rose . All music is written
29:52
and performed by the wonderful Aimless Orbiter
29:55
. You can check out more of his music at
29:57
soundcloud.com/aimlessorbitermusic . And
30:01
with all that being said , this has been
30:03
, and I'm scaling down . Bye
30:07
.