Episode Transcript
0:00
Welcome to Fall Through, a podcast
0:02
about technology, software, and computing. I'm
0:04
Chris Brandow, also known as Skriptble.
0:06
And on today's episode, host Ian
0:08
Lopshire is joined by myself,
0:10
Matthew Sanabria, and one of our
0:12
producers, Angelica Hill, to discuss complexity.
0:14
We give our hypotheses as to
0:16
where complexity came from, what potential
0:18
solutions might be, and what the
0:20
future might hold. As always, please
0:22
like, comment, subscribe, and share, and
0:24
interact with us on social media.
0:26
And with that, let's get into
0:28
the episode. Hello everyone
0:30
and welcome to Fall Through.
0:33
Today I have with me our
0:35
regular panel. Matt, how are you
0:37
today? Hello, I'm all good. Coming
0:40
from a hotel. Yeah, a new background,
0:42
who dat? Chris as well. How
0:44
are you on this snowy New
0:46
York day right now? Good, I'm not
0:48
ready for the cold, the intense cold
0:51
is going to be this week. It's
0:53
going to be, it's going to be
0:55
rough. Yeah, it's also a little weird,
0:57
you know, swapping seats here, you
1:00
hosting, me just being a co-host,
1:02
it's a little different. Yeah, I'm
1:04
not sure I like it, but
1:06
we'll power through. And then another
1:09
familiar face you all might recognize,
1:11
Angelica, one of our producers now, how
1:13
are you today? I'm wonderful.
1:15
Thanks for having me come on today, and you chose a
1:17
very very good topic so I'm excited
1:19
to dive in. Yeah, that leads right into
1:22
it. Today we're talking about complexity.
1:24
Kind of where did it all
1:26
come from? I think there's kind
1:28
of been a consensus in the
1:31
community that complexity is on an
1:33
up-and-to-the-right trajectory, right?
1:35
Things are getting more complex and
1:37
kind of, in my perspective, outpacing the world
1:40
that we're modeling. Like the complexity isn't
1:42
coming from what we're trying to do.
1:44
It's coming from the technology itself. So
1:46
I think to get started, do you
1:48
all agree with that? Do you have any
1:51
experience with that? I agree with that at a
1:53
high level. I mean, you know, we'll have to
1:55
define what we really mean by that, but I
1:57
would say it's true that the field is the
1:59
most complex it's ever been. And it's continuing
2:01
to get more complex. Chris, you agree?
2:03
Yeah. I mean, it's kind of ridiculous
2:05
how complex things have gotten over time.
2:08
Like it's so hard to build things
2:10
now. And I think I said in
2:12
the last episode about just
2:14
how, like, our hardware
2:16
has gotten so much more powerful and
2:18
yet software seems to have gotten worse
2:20
and slower. So yeah, we definitely have
2:22
some complexity problems. I agree at a
2:24
high level. I think there is an
2:26
inevitability to the complexity and then there's
2:28
the avoidable complexity in my mind and
2:30
I'm sure we'll dig into that as
2:33
we have our little chit-chat today but
2:35
I think at a high level definitely
2:37
getting more complex as time goes on
2:39
and we are definitely struggling to keep
2:41
up with that and it is causing
2:43
a lot of detrimental impact to the
2:45
work that we do and real progress
2:47
in my opinion too. Yeah so I
2:49
have a hypothesis as just where some
2:51
of this complexity comes from. So
2:53
maybe I'll give that all to you
2:55
and see what you think. I think
2:58
a lot of it kind of comes
3:00
from this idea that there was such
3:02
a demand for developers for so long,
3:04
right? And so we did a lot
3:06
to make kind of the job easier,
3:08
right? We built all these kind of
3:10
general purpose systems to make things smoother,
3:12
right? So I'm talking everything from like
3:14
Kubernetes to general purpose databases to all
3:16
of this. So we get this layer
3:18
of abstraction that was meant to make
3:20
things easier and faster. And all it
3:23
has really done is added extra steps
3:25
of learning, extra steps of things you
3:27
need to know, things you need to
3:29
do to be productive in the job.
3:31
I think the cloud is probably the
3:33
biggest, single biggest contributor to that, right?
3:35
Like, Lambda functions were not a thing,
3:37
right? And they're supposed to be easy.
3:39
But think of all the complexity you
3:41
get with trying to run something on
3:43
Lambda. The cold starts, the weird optimizations
3:45
you do. So that's my hypothesis. How
3:48
does everyone else feel? Is it unpopular opinion
3:50
time? Because I have one. I don't
3:52
know, we just get right into it.
3:54
No, I agree with you. I think
3:56
calling these things abstraction sometimes is part
3:58
of the problem because they're not actually
4:00
abstracting much because they're still forcing you
4:02
to learn what it is they're meant
4:04
to be abstracting. And I think that's
4:06
part of the problem. It's like you
4:08
promised me that if I used your
4:10
tool, I no longer have to think
4:13
about the things that you're abstracting. So
4:15
why do I use your tool? And
4:17
then we do that again and again
4:19
and again and again and again. And
4:21
it's like, I might as well just
4:23
learn to think myself, you know, that's
4:25
where I feel like a lot of
4:27
it does come from too. Yeah, I
4:29
think that also comes from like us
4:31
trying to make things like a lot
4:33
simpler so that we can get more
4:35
people kind of into the industry. So
4:38
we're like, oh, we need all of
4:40
these abstractions to make it super easy
4:42
to spin up, whatever you want to
4:44
spin up or to do, whatever you
4:46
want to do. Like we make it
4:48
easy for the first 10%.
4:50
Oh, when you think about this
4:52
for like 10 minutes, this all seems
4:54
wonderful. But then when you actually have
4:56
to go use it and deploy it,
4:58
it just becomes... like just bathed in
5:00
complexity because you're trying to attach all
5:03
these things together and it's like oh
5:05
we didn't think through this whole thing
5:07
because we wanted to give you like
5:09
a nice little easily marketable like super
5:11
like common-sense, obvious thing to do, and it's like, well, no,
5:17
no this thing doesn't actually work properly
5:19
or doesn't work the way you want
5:21
it to work. Like I think like
5:23
especially things like Kubernetes I think gives
5:25
us a lot of that where it's
5:28
pretty simple, but it's actually pretty complex at
5:30
the end of the day. Yeah, we
5:32
can go into a lot for Kubernetes.
5:34
Oh man, there's so much there. Let's
5:36
let that bake for a little bit. I mean, I
5:40
think part of that's also kind of
5:42
this idea of role expansion, right? Like,
5:44
the cloud has given us the ability,
5:46
so like as a single developer, I
5:48
can design my own infrastructure,
5:50
run my software on it, right? I
5:53
think that's probably a net positive, but
5:55
also like, now I need to be
5:57
an expert on everything. Yeah, I don't
5:59
know. I don't know how I feel
6:01
about that, because it is really nice
6:03
to be able to do everything, but
6:05
it's like, where am I failing, right?
6:07
I don't even know enough to know
6:09
where the shortcomings are. These tools work for the first
6:11
10% of your journey, and they'll get
6:13
you there very quickly and get you
6:15
very productive very quickly. But then when
6:18
you want to do that next thing
6:20
and be more productive or go to
6:22
the next step, it's like, oh, by
6:24
the way, here, learn all of this stuff, right? Take the bag out and just learn all these tools,
6:32
by the way, before you can move
6:34
forward, and that's overwhelming, and you're like,
6:36
wait a minute. Did I make a
6:38
mistake doing this thing? Going with the
6:40
cloud, going with Kubernetes, whatever the thing
6:43
is? Yeah, I think I have like
6:45
a hot take hypothesis about why we
6:47
have some of this complexity. And it
6:49
starts with we have continually over time
6:51
required people to not just have to
6:53
do fewer things that are lower in
6:55
the stack, but we've allowed them to
6:57
not have to understand things that are
6:59
lower in the stack. So I know
7:01
on the episode about whether we think
7:03
you should code, we talked about, or
7:05
I talked about at length networking and
7:08
how so many of us don't understand
7:10
how the networking stack works and how
7:12
there's a lot of repercussions, you know,
7:14
that deal with complexity with that. But
7:16
I think that goes almost all the
7:18
way up the stack. Like even people
7:20
that use virtual machines like the, say,
7:22
the JVM, or like Python or JavaScript
7:24
don't usually know how the internals of
7:26
those things work. But they're also building
7:28
libraries that become load-bearing infrastructure for the
7:30
rest of us. So if you don't
7:33
have a good understanding of how something
7:35
works, but you are trying to build
7:37
on top of it, you're probably... going
7:39
to wind up building something that doesn't
7:41
that doesn't hit that simplicity that we
7:43
all like that doesn't feel like it
7:45
seamlessly integrates with something and I feel
7:47
like we just keep doing that over
7:49
and over and over again and I
7:51
think this is how you wind up
7:53
with like the current state of the
7:56
industry where you have you know we
7:58
talk about front end and you have
8:00
people that are not JavaScript developers, not
8:02
front-end developers, but React developers. They know
8:04
React. React is all they know, and
8:06
if you want them to build
8:08
something in something else, they're not
8:10
really going to understand how to do
8:12
that, or they're going to struggle a
8:14
lot to try and figure out how
8:16
to do that. And I think that
8:18
comes from this, like, oh, well, you
8:21
don't need to know these lower level
8:23
things. Don't bother with that. And the
8:25
ethos in the industry of saying, oh,
8:27
don't reinvent the wheel. Don't go look
8:29
at that thing down there. You don't
8:31
need to look at that thing. Don't
8:33
bother with it. And when you're just
8:35
building an application, that's fine.
8:39
But when you're building something that becomes
8:41
depended on by other people. That definitely
8:43
is true of, like, APIs. Like, REST
8:46
APIs are a good example, where I
8:48
think REST APIs are just a thing
8:50
where people don't know what REST is
8:52
because they never sat down and read
8:54
the dissertation. And I know that most
8:56
people don't want to read dissertations, but
8:58
it's not that boring of a read.
9:00
But because people haven't read it, they
9:02
misunderstand what it is, and then they
9:04
build these things that are like kind
9:06
of unwieldy to use, and then you
9:08
wind up with those unwieldy things to
9:11
use, and we say we need to do something better
9:13
now, and then you wind up with
9:15
like GraphQL, which is like a response
9:17
to a misunderstanding. And
9:19
then you just have all this complexity,
9:21
because now you're like, oh, GraphQL will
9:23
solve all of our problems, but it's
9:25
like, now you've introduced all of these
9:27
other problems, because you were trying to
9:29
solve this one problem, but we didn't think
9:31
through all of the other ramifications. REST is pretty simple overall, but with GraphQL you have to do field-level security now, and that's a completely different model.
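To make that field-level point concrete, here is a rough Go sketch; the User, Viewer, and resolveEmail names are hypothetical stand-ins for what a GraphQL resolver layer ends up doing, not any particular library's API:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical types standing in for a GraphQL schema.
type User struct {
	ID    string
	Name  string // public field
	Email string // sensitive field
}

type Viewer struct {
	ID      string
	IsAdmin bool
}

// With REST you often authorize once at the endpoint boundary. With GraphQL,
// any query can reach any field, so each sensitive field needs its own check.
func resolveEmail(v Viewer, u User) (string, error) {
	if v.ID != u.ID && !v.IsAdmin {
		return "", errors.New("not authorized to read email")
	}
	return u.Email, nil
}

func main() {
	u := User{ID: "42", Name: "Ada", Email: "ada@example.com"}

	if email, err := resolveEmail(Viewer{ID: "42"}, u); err == nil {
		fmt.Println("owner sees:", email)
	}
	if _, err := resolveEmail(Viewer{ID: "7"}, u); err != nil {
		fmt.Println("stranger gets:", err)
	}
}
```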
9:40
And did we think about that when we were
9:42
going to, you know, trash our REST
9:44
API to put in a GraphQL API? Probably
9:46
not. Or, if we are using Kubernetes,
9:48
it's another similar thing with gRPC. Like,
9:50
oh, we're going to use gRPC instead
9:52
of REST. Well, did we think about
9:54
the ramifications of how gRPC is designed compared to how REST is designed, compared to how
10:03
we should maybe be doing networking when
10:05
we're doing containers on shared machines instead
10:07
of just dedicated machines talking to each
10:09
other. But like, we didn't think about
10:11
a lot of these
10:13
things, and that creates a tremendous amount of complexity. So my
10:15
hypothesis to recap is that people didn't
10:17
learn deep enough into the stack and
10:19
because of that they're making things that
10:21
don't fit with things that are lower
10:23
in the stack and those not fitting
10:26
pieces wind up creating the complexity that
10:28
we all feel. But do you think
10:30
it's a lack of willingness to take
10:32
the time to comprehend or do you
10:34
think it's there's the willingness but not
10:36
the time and the ability to really
10:38
dig? Because, Chris, basically you kept on saying, like, they
10:42
didn't think about this they didn't think
10:44
about this. Do you think it's that
10:46
software engineers today are not expected to
10:48
know as much or is it that
10:51
they're not given enough time i.e. we're
10:53
setting unrealistic expectations of how much one
10:55
software engineer one team can comprehend especially
10:57
in a world where everyone's supposed to
10:59
be like a full stack engineer but
11:01
maybe this is on top of your point. I don't think anyone actually can
11:05
be. I feel like you're an engineer
11:07
who specializes in a certain area and
11:09
you're okay, can get by, with the
11:11
other areas, but would be interested to
11:13
hear, I guess, broadly. Do you think
11:16
it's a time issue or a lack
11:18
of willingness from those who are entering
11:20
into the industry now, because they don't feel like they need to? Yeah, I
11:24
think sometimes it can be a time
11:26
issue. I think for the most part,
11:28
though, like when I mention things like...
11:30
GraphQL or gRPC. Like when I say
11:32
they didn't, you know, think about things.
11:34
It's not that like for their problem
11:36
they were solving. I'm sure they thought
11:38
about it very deeply. But when that
11:41
technology got pushed out to everybody else,
11:43
that's a completely different context than what
11:45
you were in. I'm sure GraphQL solved
11:47
a real problem that Facebook was having.
11:49
Same thing with React. It solved a
11:51
real problem that they were having in
11:53
their context for the path that they
11:55
took to get to where they are.
11:57
But that's not the path that most
11:59
people wind up in. Same thing with
12:01
gRPC, right? All of these technologies, when
12:03
they were created, I do feel that
12:06
the people that created them likely did
12:08
think through the problem domain and likely
12:10
did understand their problem domain very well,
12:12
but that does not necessarily mean it's
12:14
a good thing to translate to the
12:16
rest of us because we all have
12:18
different problem domains. So when you generalize
12:20
that thing, that's where you need to
12:22
do the extra thinking to say, oh,
12:24
does this still fit? And I think
12:26
a lot of the time the answer
12:28
is no. Like should you be hyper
12:31
optimizing your queries to avoid having to
12:33
do extra API requests? Well, if you
12:35
only have to do four or five
12:37
REST API requests to get all the
12:39
data you need, no. It's like if
12:41
you have to do 200, yes, you
12:43
probably want to turn that into a
12:45
small number of GraphQL requests. But if
12:47
you're not doing a large number of
12:49
them, or if you're not, you don't
12:51
need to have like a super low
12:53
latency application, then that doesn't make a
12:56
whole lot of sense.
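A back-of-the-envelope sketch of the threshold being described, assuming an illustrative 30ms round trip per request; the numbers are made up, the shape of the trade-off is the point:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const rtt = 30 * time.Millisecond // assumed per-request round-trip time

	for _, calls := range []int{5, 200} {
		sequential := time.Duration(calls) * rtt // N REST calls, one after another
		batched := rtt                           // one batched (e.g. GraphQL) query
		fmt.Printf("%3d calls: ~%v sequential vs ~%v batched\n",
			calls, sequential, batched)
	}
}
```

Five calls at 30ms is 150ms, which is livable; 200 calls is six seconds, which is where batching starts to earn its complexity.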
12:58
Yeah, I think it's also, like, I think it's both,
13:00
right like it's time and willingness for
13:02
sure depending on who you're talking to
13:04
and what's going on right because the
13:06
business is always going to say Business
13:08
goals first, do the business thing, get
13:10
the thing out the door, ship it,
13:12
ship it and that's always going to
13:14
be, like, you know, in tension with simplicity
13:16
and thinking through your designs and and
13:18
understanding the abstractions that you're settling for
13:21
it's always going to be a tension with
13:23
that but to Chris's point I'm sure
13:25
that all these technologies that you've mentioned
13:27
did solve problems for the companies that
13:29
created it, but they also had different
13:31
constraints, like you say. So if we
13:33
apply those same tools to solve the
13:35
problems that we have, the problem might
13:37
look similar, but our constraints are much
13:39
different, or our circumstances are much different.
13:41
Saving, you know, taking four API requests
13:43
down to three might actually be significant
13:46
in something like meta when you're handling
13:48
millions of requests or whatever, but for your
13:50
company that makes no money and has
13:52
no users, it doesn't matter. It really doesn't
13:54
matter, right? Because it's not the source
13:56
of your problem. And I think a
13:58
lot of us look at that and
14:00
they say, oh, well, Facebook's using it
14:02
or meta or whatever their name is
14:04
now or Google's using or whoever's using
14:06
it. It must work for everybody. We
14:08
have to do it too. It's like,
14:11
you're not them. It's okay to not
14:13
be them and not take down that
14:15
complexity just because they happen to do
14:17
it. Yeah, I do think there's like
14:19
almost like a, I don't know, it's
14:21
like trendy to build things like you're
14:23
FAANG, right? I think this is kind
14:25
of a hot take, but I don't,
14:27
like no knocking, like I don't, I
14:29
don't mean to, like, insult FAANG companies.
14:31
They have like a lot of path
14:33
dependency that they're dealing with. A lot
14:36
of these companies are old. I don't think
14:40
you could objectively say they do the
14:42
best engineering. I think they do good
14:44
engineering. I think they do a lot
14:46
of good software development. But because of
14:48
the path dependency they have, they have
14:50
to do things in particular ways to
14:52
deal with all of the stuff they've
14:54
already built. Which means that they can
14:56
do the best for themselves, but not
14:59
best generally. Like the people that build
15:01
the best things generally are people that
15:03
are building new things
15:05
now. That's like a continuously rolling thing.
15:07
And I think a lot of us,
15:09
a lot of the industry has gotten
15:11
in its head that, oh, these companies
15:13
are big and successful. So what they
15:15
did must be great. But it's like,
15:17
you don't know why they're successful. You
15:19
don't know if what they have decided
15:21
to do is benefiting them or detrimenting
15:24
them, because they are behemoth organizations that
15:26
are very old. If they do something
15:28
that's a blunder, it could take years
15:30
or decades for it to actually collapse.
15:32
Whereas if you adopt that thing, it
15:34
could collapse for you instantly, right, or
15:36
it could collapse for you very quickly.
15:38
So I think we as an industry,
15:40
like, I guess I want to provide
15:42
some pushback and say, like, let's stop assuming they do the best engineering in the world and start actually giving, like, context to what
15:49
they are and what they're doing. I
15:51
think that's important as well because when
15:53
we don't do that we wind up
15:55
with a lot of just too
15:57
much complexity. But Chris, levels.fyi says I should work at FAANG if I want the TC. What are you
16:03
talking about? Money talks right? What's going
16:05
on here? Yeah. Sorry go ahead go
16:07
ahead Ian. I think people also forget
16:09
that like Facebook didn't start with graph
16:11
QL and React, right? And you're building
16:14
for longevity that maybe is not needed
16:16
and it's probably a detriment, right? Like,
16:18
I mean, I work at a small
16:20
company and the idea of like building
16:22
something less good than I possibly could
16:24
and then having to go back and
16:26
change it later, like, isn't appealing because
16:28
getting someone to say like, oh, this
16:30
needs to be optimized in a way to handle
16:32
our new scale, like getting that kind
16:34
of project through is very difficult. Right?
16:36
Like, nothing will change to the users,
16:39
but I need six months to make
16:41
sure this doesn't fall over. That looks
16:43
bad on me. It's like the thing
16:45
you built previously is broken. Like, you
16:47
mean, you need six more months to
16:49
make it scale to our current users?
16:51
There's like a systemic problem there, right?
16:53
Where it's, you can't build for your
16:55
current scale, because if you do scale,
16:57
you're not gonna get the opportunity to
16:59
fix it. So that tracks; I've been bitten by it as well. My only pushback to that
17:06
argument is sure you should be like
17:08
building whatever you're building you should be
17:10
thinking about how it's going to be
17:12
used and like kind of account for
17:14
that future, not so much that
17:16
you're over-optimizing early, but enough where
17:18
you're like okay I can see on
17:20
the horizon if we meet our goals
17:22
if our company does well here's where
17:24
we'll be I can I can project
17:26
to that but also tech is not
17:29
the only thing that should be like
17:31
built for the future, or that will
17:33
fail as you scale. There
17:35
are so many businesses that over index
17:37
on my tech has to be ready
17:39
for a million users. It's like, what,
17:41
your sales team isn't ready for a
17:43
million users your support team isn't ready
17:45
for a million users None of your
17:47
company's ready for a million users outside
17:49
of the tech. Forget about the tech,
17:51
you want a million users, like what
17:54
good is having a million users and
17:56
the site's up, but they're all pissed
17:58
off because they can't interact with support
18:00
or something. Like thinking about it that
18:02
way too, you know, and I don't
18:04
know, I think a lot of companies
18:06
way over index on the tech when
18:08
it's just a part of the business.
18:10
But I think they over-index on the wrong metrics on tech as well. It's like, oh, we need to scale to a million users. But, like, what does that mean? What does that actually mean? Right, like, you
18:41
need to say more than just like,
18:44
oh, a million users can use a
18:46
site. I'm like, well, if a million
18:48
users are on the site and you
18:50
have like one second latency to load
18:52
your web page, well, that's not great.
18:54
So it's like, oh, you have
18:56
to actually add more context and parameters
18:58
to things. And then everything starts getting
19:00
complicated, right? Because you're like, well, what
19:02
does it mean, like, you know, to
19:04
be able to scale? Like, are
19:06
we saying one server should be able
19:09
to handle a million users? Are we
19:11
saying we have this distributed system? Okay,
19:13
under like what constraints does that distributed
19:15
system work for, you know, a million
19:17
users? Like, how many network failures can we tolerate? How many drives can be failing? There's all of this
19:29
technology, but the nuance is just humongous
19:31
when you want to get to any
19:34
reasonable sense of scale. So I also
19:36
kind of want to look at the
19:38
other side of this, right? I mean,
19:40
so complexity is going up, right? I
19:42
do get the sense that, like, in
19:44
some regards, it's good because the complexity
19:46
comes from, like, being concerned with correctness,
19:48
right? Like, having fault tolerant databases and,
19:50
like, event systems that don't lose data.
19:52
I feel like we have made progress
19:54
there, right? So do you think some
19:56
of this complexity is coming from just
19:59
better practice, like more robust systems? I
20:01
really want that to be true, but
20:03
I don't think it is. Like I
20:05
really want it to be that like,
20:07
oh, we're building these more robust things,
20:09
and that's where this. complexity comes from
20:11
like I yeah, I don't know you
20:13
can get really far and reliable With
20:15
simple systems you really can like I
20:17
understand everyone wants five-nines plus of availability
20:19
and whatnot and they can't take downtime
20:21
ever; it's unacceptable. It's like, well, I
20:24
think you're lying to yourself that it's
20:26
unacceptable because what it takes to get
20:28
there is very complex You know like
20:30
the systems that the hyperscalers run that
20:32
chase five nines, they're very complex systems
20:34
and they have very complex teams behind
20:36
it to manage them. So is that
20:38
what you want or are you okay
20:40
taking, like, four nines of availability with
20:42
a much simpler system that you can
20:44
actually manage but the tradeoff is that
20:46
it could be downtime?
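For concreteness, the downtime budgets behind those nines work out as follows (plain arithmetic, nothing vendor-specific):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	year := 365 * 24 * time.Hour
	for _, avail := range []float64{0.999, 0.9999, 0.99999} {
		// Allowed downtime is just the unavailable fraction of a year.
		budget := time.Duration((1 - avail) * float64(year))
		fmt.Printf("%.3f%% availability -> ~%v of downtime per year\n",
			avail*100, budget.Round(time.Second))
	}
}
```

Three nines leaves almost nine hours a year; five nines leaves about five minutes, which is where the complex systems and complex teams come in.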
20:49
And I think a lot of companies, they just miss
20:51
that and they say it's for reliability
20:53
of data or whatever, but they're just
20:55
kind of like... putting that facade on
20:57
to try to convince themselves that they're
20:59
more important, that the business is more
21:01
important than what it actually really is
21:03
under the hood. You know, and it
21:05
sucks to say that, right? It sucks
21:07
to say like, I have a business.
21:09
It's okay if it's not up. Like,
21:11
no business wants to hear that, but
21:14
sometimes it's okay. I feel like if
21:16
you want, like, high availability, like, you
21:18
know, you want five nines. Like, I
21:20
think the thing that gets missed out
21:22
of that is this:
21:24
The common denominator between all those big
21:26
companies is the systems that they use
21:28
to get those five nines are all systems
21:30
they built themselves. And whenever anybody else
21:32
is like, oh, we want five nines, let's
21:34
go use somebody else's system. Like you're
21:36
not using the same logic that they
21:39
used to get the systems that use
21:41
five nines. You're just trying to take their
21:43
five-nines systems and use them yourself. But
21:45
Borg worked. So I get to use
21:47
Kubernetes. What do you mean? That's what
21:49
they promised me. They're like, Google's ideas will solve our problems. And I
21:53
also think there's this, this thing that's
21:55
happened where the base has shifted,
21:57
like subtle things underneath us have moved
21:59
and we've recognized that and I think
22:02
that spawns complexity in in a lot
22:04
of ways. Like I think a good
22:06
example of this is containers, right? I
22:08
mean, I talk about network too much.
22:10
I'm sorry listener. I like talking about
22:12
network. Okay, so like the containers and
22:14
networking, when you
22:16
think about it. I think most people,
22:18
because the argument, the marketing, was all,
22:20
oh, containers are just like smaller VMs.
22:22
You should just treat them like smaller
22:24
VMs. But containers are not smaller VMs
22:27
when it comes to networking because the
22:29
containers are running on a VM, which
22:31
means that the containers are all sharing
22:33
a single NIC, which means that the
22:35
way you do networking should be completely
22:37
different than how you would do it
22:39
for things that aren't sharing the same
22:41
Nick. But how many of us have
22:43
designed systems that actually... take that into
22:45
account. Take both the advantages and the
22:47
disadvantages of that into account. Probably not
22:49
most of us. We just kind of
22:52
like, oh, let's just keep using TCP.
22:54
And it's like, okay, well, now you're
22:56
one box that's talking to this one
22:58
other box has like 3,000 TCP connections
23:00
between it. Why? Why do you have
23:02
that? Can you do that before? Can
23:04
you change how you do networking so
23:06
you don't need to make all these
23:08
extra TCP connections and have all the
23:10
extra TCP overhead?
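A small Go illustration of how those connection counts sneak up on you; the anti-pattern below is a sketch, not taken from any particular codebase:

```go
package main

import (
	"io"
	"log"
	"net/http"
)

// Anti-pattern: a fresh Transport per call gets its own connection pool,
// so every call dials a brand-new TCP connection.
func fetchNoReuse(url string) error {
	c := &http.Client{Transport: &http.Transport{}}
	resp, err := c.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}

// Better: one shared client. Draining and closing the body lets the
// underlying connection go back into the pool to be reused.
var shared = &http.Client{}

func fetchReuse(url string) error {
	resp, err := shared.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}

func main() {
	if err := fetchReuse("https://example.com"); err != nil {
		log.Fatal(err)
	}
	log.Println("fetched over a pooled connection")
}
```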
23:12
You know, if you go into a company, like I've done this before, you go into a company
23:17
and you suggest maybe we shouldn't use
23:19
Kubernetes, or maybe we shouldn't use gRPC,
23:21
or maybe we
23:23
should try doing this in a different
23:25
way, people are usually like, well, we can't do that. That's not the thing that everybody else is using; don't do this other thing. So I
23:35
think that's also where a lot of
23:37
this complexity comes from: not thinking about what changes that
23:39
tech entails for us, and then we
23:42
are kind of surprised when, oh, it
23:44
didn't work the way we wanted it to
23:46
work, or didn't give us those five
23:48
nines, or didn't give us the availability
23:50
or the redun-, the reliability, the redundancy
23:52
that we wanted. It's interesting you say
23:54
that. I do think like the cloud
23:56
has given us the opportunity to chase
23:58
these things too, right? Like I can
24:00
spin up my application in one region
24:02
one availability zone, but just with a
24:04
little bit of extra complexity, you know,
24:07
I can spin it up in two
24:09
regions and eight availability zones, right? And
24:11
you know, it just looks like this
24:13
little bit of complexity. But then you
24:15
do that... 70 different places and all
24:17
of a sudden you have a lot
24:19
of complexity, right? So there's these like
24:21
little promises that I think pop up
24:23
that just like compound and compound. But
24:25
is that not worth it though? I
24:27
don't know if it's worth it. Like
24:29
I think that there's an argument for
24:32
where some of the complexity is very
24:34
much worth it. Like I'm sure there
24:36
are situations I've certainly been in where
24:38
there's like we could do it this
24:40
way, very simple, this is what it'll
24:42
give us, but if we just add
24:44
this additional little thing, whether it's like,
24:46
as you said, like adding that
24:48
additional region, I feel like there are
24:50
instances in which that complexity is the good kind of complexity, in my mind, where it's valid: it gives
24:57
you enough pros to that decision, and
24:59
yes, fair, if you continue to do
25:01
that and continue to do that, you're
25:03
gonna, it's gonna be much more complexity
25:05
than maybe you were prepared to handle,
25:07
but I do think there is an
25:09
argument, and there is a categorisation of
25:11
complexity in this conversation that's like positive
25:13
in a lot of ways I think
25:15
it's more and maybe back to your
25:17
point Chris around like thinking it through
25:19
and really making sure that complexity is
25:22
correct and right for your use case
25:24
for your problem you're solving and you're
25:26
not trying to smush like a square
25:28
peg into a round hole and just
25:30
saying it fits because, you know,
25:32
it's not gonna cover most of it
25:34
even if there are these cracks. So
25:36
I'd be interested to hear from
25:38
you all on your views on that. Like
25:40
I think there is a distinction between
25:42
the positive complexity and the negative. I
25:44
just don't think we've got very good
25:47
at distinguishing the two. No, I would
25:49
100% agree with distinct. between the two
25:51
is the hard part. I think that
25:53
any system that grows over time is
25:55
inherently gonna become more complex, like by
25:57
definition, right? It has to, right? You
25:59
can't, you can't grow and keep it
26:01
the same simplicity. It just
26:03
doesn't exist. But over time, like, as
26:05
you choose to like increase complexity in
26:07
your system for growth, you should be
26:09
thinking about that like refactoring or that
26:12
maintenance work to kind of get back
26:14
some simplicity, right? Can I re-architect this?
26:16
Can I do this differently? You should
26:18
be thinking about that as you work
26:20
on your system. And like, with the
26:22
region example, yeah, sometimes you do need
26:24
multi-region redundancy and whatnot, but you should
26:26
be thinking it through, but not a
26:28
lot of people do. And they end
26:30
up doing simple mistakes like putting their
26:32
S3 bucket in a 'global' region
26:34
that's really just in us-east-1,
26:37
and they think they have multi-region, but
26:39
they don't. or the DNS is in
26:41
one region and that goes down, or
26:43
things of that nature. It's like if
26:45
you're going to accept the complexity in
26:47
the name of like reliability or availability
26:49
or whatever it's going to be, you
26:51
need to make sure like it cuts
26:53
through all the layers, you know, or
26:55
at least enough layers to where you're
26:57
comfortable of taking that risk. Because so
26:59
many people don't. They're just like, click,
27:02
easy, multiple regions, we're done. It's
27:04
like, no, there's more to do, I
27:06
swear. We had a cache that all the regions
27:08
talked to; it lived in multiple regions.
27:10
And eventually we realized, hey, wait a
27:12
second, what if we just have a
27:14
separate cache in each region and they
27:16
don't know about each other? And that's
27:18
fine, right?
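A minimal sketch of that simplification, assuming the cached data can be refilled locally from the source of truth; all the names here are illustrative:

```go
package main

import "fmt"

// regionCache is a plain in-process map. Nothing in here knows that other
// regions exist, which is exactly the point: no cross-region coordination.
type regionCache struct {
	region string
	data   map[string]string
}

func newRegionCache(region string) *regionCache {
	return &regionCache{region: region, data: make(map[string]string)}
}

func (c *regionCache) Set(k, v string) { c.data[k] = v }

func (c *regionCache) Get(k string) (string, bool) {
	v, ok := c.data[k]
	return v, ok
}

func main() {
	// One independent cache per region; a miss is refilled locally rather
	// than synchronized across regions.
	caches := map[string]*regionCache{
		"us-east": newRegionCache("us-east"),
		"eu-west": newRegionCache("eu-west"),
	}
	caches["us-east"].Set("greeting", "hello")

	for region, c := range caches {
		v, ok := c.Get("greeting")
		fmt.Printf("%s: value=%q present=%v\n", region, v, ok)
	}
}
```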
27:20
I think a lot of the complexity that we face, I think
27:22
software specifically, comes from wanting to move
27:24
faster, or really not even wanting to
27:27
move faster, but wanting to think less.
27:29
Right? Like I think databases are a
27:31
good example of this. Most of the
27:33
time people will say... just use Postgres.
27:35
Let's just go use Postgres, but I
27:37
think people don't understand that if you
27:39
just say just use Postgres, you're tying
27:41
yourself to like a single-region sort
27:43
of thing. Like as soon as you
27:45
want to do a kind of more
27:47
distributed type of thing, you're going to
27:49
run into giant amounts of pain because
27:52
Postgres was never designed to be a
27:54
distributed sort of thing, right? It's like,
27:56
no, you run this on a server
27:58
and it just kind of, you can
28:00
like replicate it so you can have
28:02
some sort of failover, but you know,
28:04
failover is like a very, like a
28:06
small corner of what it means for
28:08
an application to be distributed. Failover doesn't really
28:10
wind up being resilient at the end
28:12
of the day. It just means that
28:14
instead of having an operator come in
28:17
and flip a switch, it flips the switch
28:19
itself. And I think that a lot
28:21
of the time, even though we say
28:23
that people should just use Postgres, you
28:25
could probably get away with writing your
28:27
own thing on top of maybe SQLite or BoltDB or something. Like, how much of SQL do you need, and how much is SQL actually helping you?
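As one hedged illustration of that question: if the access pattern really is keys and values, an embedded store can be the whole database. This sketch assumes the go.etcd.io/bbolt module (the maintained fork of BoltDB); the bucket and key names are made up:

```go
package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// Open (or create) a single-file database; no server to run at all.
	db, err := bolt.Open("app.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// One write transaction: create a bucket and store a value.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("users"))
		if err != nil {
			return err
		}
		return b.Put([]byte("user:42"), []byte(`{"name":"Ada"}`))
	})
	if err != nil {
		log.Fatal(err)
	}

	// One read transaction: look the value back up by key.
	err = db.View(func(tx *bolt.Tx) error {
		v := tx.Bucket([]byte("users")).Get([]byte("user:42"))
		fmt.Printf("user:42 = %s\n", v)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```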
28:37
Because I think we as an industry sit here and we're like, oh
28:39
yes, of course you're going to use
28:42
SQL. That's the thing that makes sense.
28:44
But if you look at the distributed
28:46
systems research and kind of like the
28:48
database research, that's not the style of
28:50
database that I think a lot of
28:52
us are going with or makes a
28:54
lot of sense. Like I think Datomic
28:56
is a good example of a different kind of database.
28:58
It's like, yeah, no, we don't use
29:00
this SQL thing. We're talking about facts
29:02
that are asserted and retracted, and you derive what you want to have at the end of the day. And when
29:09
you just have facts you're moving around,
29:11
those are a lot easier to move
29:13
around than like the concepts of SQL
29:15
queries and the way that we tend to optimize
29:19
them.
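A loose sketch of that fact-oriented model, reduced to a Go slice of fact tuples; this illustrates the idea the hosts attribute to Datomic, not Datomic's actual API:

```go
package main

import "fmt"

// Fact is an (entity, attribute, value) tuple plus whether it was asserted
// or retracted. The log is append-only; nothing is updated in place.
type Fact struct {
	Entity    string
	Attribute string
	Value     string
	Asserted  bool // true = asserted, false = retracted
}

// current replays the log and reports the last standing assertion.
func current(log []Fact, entity, attr string) (string, bool) {
	val, ok := "", false
	for _, f := range log {
		if f.Entity == entity && f.Attribute == attr {
			val, ok = f.Value, f.Asserted
		}
	}
	return val, ok
}

func main() {
	log := []Fact{
		{"user/42", "email", "ada@old.example", true},
		{"user/42", "email", "ada@old.example", false}, // retraction
		{"user/42", "email", "ada@new.example", true},
	}
	fmt.Println(current(log, "user/42", "email")) // ada@new.example true
}
```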
29:21
So I think, like, even that first decision of saying, oh yes, we're
29:23
just going to not only just use
29:25
a database by default, but use a
29:27
SQL database by default, I think you
29:30
wind up pulling in a whole bunch
29:32
of complexity that you... you might not
29:34
realize you're pulling in, and you might
29:36
not reap the benefits that you think
29:38
you're going to get out of it.
29:40
Like if you don't have query patterns
29:42
that are heavier, you're not really utilizing
29:44
all of the things that Postgres can
29:46
do, and you're just like, I'm just...
29:48
storing simple data, then you've brought in
29:50
a whole bunch of complexity, you've restricted
29:52
what you can do in the future,
29:55
at least in a cost effective manner.
29:57
You can go use one of Amazon's
29:59
or some cloud provider's redundant and resilient
30:01
versions of Postgres and pay
30:03
an arm and a leg for it.
30:05
But you kind of made these decisions
30:07
up front. I don't think people have
30:09
actually thought through that whole set of
30:11
circumstances and that complexity. Because I think
30:13
they're just told, like, no, this is it.
30:15
I think that in this case, that
30:17
gets disguised as simplicity. It's like, this
30:20
is the simple choice. Just pick Postgres.
30:22
It's easy. It's simple. You have a
30:24
database now. You can move on to
30:26
other things. And it's like, actually, you've
30:28
just let the complexity monster into your
30:30
house. Like, have fun. And God forbid
30:32
you wind up scaling. Like, I remember
30:34
listening to the Oxide and Friends episode
30:36
where they talked about, like, why they
30:38
went with Cockroach and didn't go with
30:40
Postcraz, because they're like, yeah, we started
30:42
scaling, and then all of a sudden,
30:45
all of these demons popped out, and
30:47
like, it was just, it's just a
30:49
monster, and it was just a mess.
30:51
So it's like, yeah, it's probably not
30:53
giving you what you want at the
30:55
end of the day, and a lot of that I think comes from people not thinking or
31:07
not wanting to think as much or
31:10
like not wanting to push back on
31:12
the general ethos of the industry that
31:14
you know just says go use this
31:16
thing. I think you kind of hit
31:18
on an interesting point there like even
31:20
pulling in relational data into like your
31:22
models adds a complexity that like we
31:24
don't really think about right like a
31:26
key value store is very simple when
31:28
I start adding this relational data and
31:30
I can do things that I couldn't
31:32
with a key-value store, and I guess
31:35
what I'm getting at here is it
31:37
might not even be like we don't
31:39
want to think about it. It's like
31:41
we don't know the complexity. Like it's
31:43
not a muscle we flex, like knowing
31:45
what complexity we're bringing in accidentally. I
31:47
mentioned a few places where complexity kind
31:49
of hides or shows itself. You know,
31:51
Kubernetes, the cloud, networking, databases. Where
31:53
else have you all seen complexity creep
31:55
in? You know, in your work with others or whatever.
31:57
Where else have you seen this? I
32:00
actually think there's a lot of complexity
32:02
that creeps in just because of our
32:04
like human organizational structures, right? You've all
32:06
heard people talk about microservices as a
32:08
not a tech solution. They're like a
32:10
human communications solution, right? And like that's
32:12
just one example. I think there's quite
32:14
a bit where it's like this team
32:16
handles that. So I do this work
32:18
around so that I don't have to
32:20
talk to this team to do a
32:22
thing, right? I think a lot seeps
32:25
in there. How about you Angelica? Any
32:27
other place you've seen it? Yeah, I
32:29
mean, I would say, I think two
32:31
places, one is what you mentioned. I
32:33
think often we organize, and we've actually,
32:35
my team specifically have just gone through
32:37
a reorg because of this, is we
32:39
went through a number of years where
32:41
we were organizing our teams based on
32:43
our architecture, or the other way around
32:45
where we went, okay, we have, for
32:47
example, my teams, there's four teams, and
32:50
they all have their own domains. And
32:52
so they build with those domains in
32:54
mind. So I'll have conversations where my
32:56
engineering team will come to me and
32:58
say, hey, we've decided to build it
33:00
in this way. And one of the
33:02
reasons on the list, you know, there
33:04
are technical reasons too, they're very smart
33:06
engineers, but one of the reasons that
33:08
makes it on the official tech brief
33:10
is so that we don't have to
33:12
talk to this other team, or we
33:15
don't have to have a dependency on
33:17
this other team. Or hey Angelica we
33:19
could do it this way, this is
33:21
probably the right way to do it,
33:23
but like we don't want to do
33:25
it that way because we don't want
33:27
to have a dependency, we don't want
33:29
to be deeply coupled with this other
33:31
system, this other service, even within our
33:33
own architecture there are conversations we're like
33:35
okay well we have these two services,
33:37
if we can avoid coupling them we'd
33:40
really want to. Why? So we don't have
33:42
to like mess with the dependencies that
33:44
you know if we have to make
33:46
a change in this API later we
33:48
don't have to worry about it. Not
33:50
even if that's the correct technical path.
33:52
I think there's a lot of
33:54
that so you know I think you
33:56
hit the nail on the head there.
33:58
I also think there's a lot of
34:00
like feature creep that happens and I
34:02
smile as I say this because I
34:05
am accountable definitely for causing some of
34:07
this where product manager like me will
34:09
come to my team and say the
34:11
board has decided they're really excited about
34:13
this new AI ML feature. It's usually
34:15
some new technology that you know someone
34:17
high up has heard about and got
34:19
very excited about with no understanding of
34:21
the complexity or whether in fact we
34:23
are the right team to be building
34:25
it or doing anything about it. Like
34:27
hey Angelica you need to make this
34:30
happen you have a month, do it. We
34:32
have a conversation and the... because the
34:34
framing of the conversation is hey we've
34:36
got a month to do this we
34:38
have to get something done instead of
34:40
thinking as an engineering team what is
34:42
the best solution okay should we even
34:45
be doing this work it's like okay
34:47
well how can we like tag this
34:49
on to what we already have how
34:51
can we use something that was never
34:53
supposed to be for this reason to
34:55
somehow make it work and then of
34:57
course you do that you add that
35:00
tech debt the feature is you know wonderful
35:02
because you've done it the best way you
35:04
can you've made it work so externally everyone's
35:06
like great it's working and then they go
35:08
okay great so we're not going to use
35:10
this for the next three years and we're
35:13
going to get all the other teams using
35:15
it too. And then you have somewhat,
35:17
kind of like what Chris mentioned, where
35:19
they think oh great so Angelica's team has
35:21
done it this way. We're just going to
35:23
go into their code copy paste. We'll just
35:26
make it work. They did it this way
35:28
so it must be working for them. And
35:30
then you just have everyone implementing
35:32
code that no one has really
35:34
had the time bluntly to think through in
35:37
the right way. And you have these patterns.
35:39
And often it does come from, you know,
35:41
I've seen this happen on my team, we're
35:43
like, oh, this other team did this, we
35:45
should just do it that way. Like we
35:48
don't even need to write a tech brief,
35:50
this is a problem that's already been solved,
35:52
why are we reinventing the wheel? Let's just
35:54
go in and see how they did it
35:57
and literally copy paste this code, make a
35:59
few changes, see if it works, without
36:01
thinking about that long term. We need
36:03
like an ad segment here like a
36:05
lawyer segment if you or someone you
36:07
know is hurt by a product manager
36:09
somewhere please let us know reach out.
36:11
Let me say this in two minds, because in directors' and managers' defense, there
36:16
is a lack of empathy at the highest
36:18
levels because a lot of the time
36:20
people don't care about how it's done,
36:22
bluntly. They don't care about the details,
36:24
they just care that it's done and
36:26
that it hits the metrics that they're
36:28
looking for. And regrettably that, yeah. Yeah,
36:30
I think that's where I think Chris
36:32
was getting at earlier about, like, we're
36:34
measuring the wrong metrics, it's a willingness,
36:36
probably, or time problem, all of that
36:38
comes into play, and your first example
36:40
sounded more like trading complexities. Right? I
36:42
don't want to interact with this team,
36:45
so I will take on the burden
36:47
of complexity or I'll trade one complexity
36:49
for another. And that's actually a pretty
36:51
interesting point to go by because I
36:53
haven't really thought about it that way
36:55
of trading complexities until you brought up
36:57
that point. I have nothing to say
36:59
there. I just want to say it
37:01
was a good point. That's really interesting.
37:03
Like having one database per microservice, per
37:05
service, right? That's a... place where you
37:07
trade a lot of complexity, right? Like,
37:09
I have to replicate this now, but
37:11
it's easier to reason about. And just
37:14
going back, like, I just don't think
37:16
we know the repercussions and haven't, like,
37:18
flex that muscle to know, like, am
37:20
I making the right trade? Which one
37:22
is more complex? Which one's not, right?
37:24
And I think the only way you
37:26
learn that is by making mistakes. You should document why
37:28
you made the decision to move forward,
37:30
so that when you do reach the
37:32
point where you're like, hey, this
37:34
no longer works for me, you could
37:36
at least refer to that and say,
37:38
here's why I did it, and I
37:41
can go the other way now. If
37:43
you document that or write your RFD
37:45
or whatever, I think that helps mitigate
37:47
that a little bit, maybe not mitigate
37:49
it, but help you pivot when you
37:51
need to pivot later on. And you
37:53
learn more that way, I would say.
37:55
Because I don't remember why I made
37:57
this decision three years ago when I
37:59
can go back and look at it
38:01
and say, like, this was a wrong
38:03
decision. Why did I make it? Exactly.
38:05
I think there's a root here. There's
38:07
like a root of the complexity that I didn't really think about until you mentioned, where else have you seen complexity outside of just
38:16
the tech and I think that root
38:18
complexity is that we as an industry
38:20
are abysmal to like a kind of
38:22
ridiculous degree at organizing information like we
38:24
are very, very, very bad at it. We're really good at generating
38:28
information. We're bad at knowing where the
38:30
information is at any point in time,
38:32
right? Because I think all of the
38:34
problems we have talked about have to
38:37
deal with there is probably some information
38:39
somewhere that we could be consuming to
38:41
make the systems we build less complex,
38:43
but we just don't know where it
38:45
is, right? That's the whole, like, you
38:47
know, if you look at the systems
38:49
that FAANG builds, they're giving you the
38:51
code, but not any of the history of how
38:53
they wound up with that code. When
38:55
you're talking about things like what you
38:57
were talking about, Angelica, where it's like,
38:59
oh, the higher ups, you want to
39:01
do things this way. Well, the higher
39:03
ups have no contacts. Like, every company
39:06
I have worked at, the higher up
39:08
people, have very little understanding of what
39:10
it is that the actual engineering teams
39:12
are doing, and the engineering teams are
39:14
very bad at providing information to those
39:16
execs about what they're doing. And every
39:18
time that I've sat down with an
39:20
exec or someone that's high up, and
39:22
I've brought the information to them and
39:24
said, this is why we shouldn't do
39:26
it this way. This is how much
39:28
time we need to do it the
39:30
right way, and just laid
39:33
out all the information. They're like, oh
39:35
yeah, no, that makes sense, do it that way. Just do that thing. So much of the time they're not being malicious. They're just like, well, you've provided me the information, so just do it this way; or they're operating from a very weak understanding when they say you can just
39:51
do it this way. And I think
39:53
too, that whole, we're gonna trade off
39:55
this complexity of not wanting to talk
39:57
to this other team. That's also an
39:59
information organization problem. Like it's too hard
40:02
to communicate with these people, so I
40:04
don't want to have to. So we
40:06
have to make it easier to communicate
40:08
with them. But the way you get
40:10
to that is by having. better organized
40:12
information because if the way I'm communicating
40:14
with people is by talking to them,
40:16
then that wastes my time and their
40:18
time and now we have to actually
40:20
synchronize with that. But if the way
40:22
I communicate with another team is by
40:24
reading some information they've created or watching
40:26
a video or whatever medium that they
40:29
have provided the information to me in,
40:31
then it's a lot easier to communicate
40:33
with them and to actually collaborate with
40:35
them. So like I think all of
40:37
these problems stem from our really strong
40:39
desire to just not organize information. Like
40:41
we love generating it. We love having
40:43
the meetings, doing the whiteboarding. We love
40:45
writing up docs, all of that. But
40:47
we just don't actually turn around and
40:49
put it in a place where we
40:51
can find it. Or like, oh, like
40:53
Google search, or we'll do some web
40:55
search, or we'll put it in the
40:58
wiki, and the wiki will have a
41:00
search, and then we'll find it that
41:02
way. It's like, no, like, actually come
41:04
up with information, organization systems, and deploy
41:06
them, and we would solve a lot
41:08
of our inherent complexity. I think the
41:10
funny thing about it too is it's
41:12
not like it's an expensive endeavor to
41:14
do this. I think it's like a
41:16
willingness of us as an industry to
41:18
admit that, you know, we can't solve
41:20
everything with algorithms and some of it
41:22
is we just need to hire some
41:25
humans that are trained in this, we
41:27
just need to hire some humans that
41:29
are trained in this to actually go
41:31
around, collect the information and organize it,
41:33
and then deploy them. And we just
41:35
didn't do that as an industry. And
41:37
for some reason, I still don't understand
41:39
why. We just don't want to do
41:41
it. And instead we wind up spending
41:43
absurd amounts of money getting ourselves in
41:45
these holes and like, I'm sure many
41:47
a company has gone bankrupt because of
41:49
this. Like, many good ideas have floundered
41:51
because of this, or at least have
41:54
been prohibitively more expensive than they should
41:56
be, because we just don't want to
41:58
organize our information. We don't want to
42:00
put it in a spot where you
42:02
can find it. I just want to
42:04
spam the clapping emoji, if this had emojis. But I think what you've
42:08
made me think about is, I think
42:10
there are two types of complexity we're
42:12
talking about here. I think there's the
42:14
kind of... innate link between cognitive understanding
42:16
and complexity, i.e. if we make things
42:18
easier to understand, we won't feel
42:20
like things are so complex, kind of
42:23
like what you're speaking to Chris. If
42:25
we make that information easy to access,
42:27
easy to understand, and we have processes
42:29
in place to make sure it continues
42:31
to be, because as someone who's done
42:33
documentation sprints, and then five months later
42:35
we haven't done any documentation since. Well,
42:37
you have to continue to do it
42:39
as a practice, otherwise. So I think
42:41
what you're speaking to really resonates in
42:43
terms of trying to make that kind
42:45
of cognitive complexity, that feeling that things
42:47
are complex, go away. But I wonder
42:50
if, and I'm trying to think out
42:52
loud here, if we provided all that,
42:54
whether that would be enough to eliminate
42:56
that negative technical complexity that we've been
42:58
touching on. I don't think it would
43:00
be. I think it would be a
43:02
step in the direction of solving it,
43:04
right? It's not, it's going to take
43:06
a long time because we've accrued a
43:08
lot of debt around this, right? Like
43:10
we've accrued like a tremendous amount, like
43:12
basically we spent the past 20 or
43:14
30 years just lighting all of our
43:16
information on fire and using that to
43:19
propel us forward. And I mean, that's
43:21
how you wind up with all of
43:23
this tech that we have. That is
43:25
just like... absurdly complex. Like, trying
43:27
to operate Kubernetes is like a monstrous
43:29
behemoth and you have startups of 10
43:31
people being like we're gonna embark on
43:33
this and it's like unwinding that from
43:35
the industry and unwinding that ethos is
43:37
going to be a very challenging thing
43:39
because it also means that we're going
43:41
to have to sit down and actually
43:43
train people on all of the lower
43:46
level stuff you need to know to
43:48
actually be able to build the things
43:50
that you really should be able to
43:52
build. Like I think a lot of
43:54
it is like how if you do
43:56
want to learn how to design a
43:58
REST system that actually aligns with what
44:00
Roy Fielding wrote about and with how
44:02
the web actually works, there are like no books
44:04
you can go read that will give you this information, right? You've
44:08
got to go scour the web for
44:10
as much as you can, like, pick up and then read not just Roy Fielding's dissertation but the follow-on
44:17
work about REST. There's like a whole
44:19
bunch of stuff you gotta go pick
44:21
up, there's no easy way to just
44:23
learn this. And then even once you
44:25
do learn it, actually figuring out how
44:27
to deploy it, how to realize it,
44:29
it's also just a huge challenge. And
44:31
there's not a lot of writing about
44:33
how to actually take theoretical things and
44:35
turn them into like actual real deployed
44:37
things. So there's like all this missing
44:39
information that leads to us having missing
44:42
skills. So when you sit down, like,
44:44
I mean, I can say it super,
44:46
say it kind of flippantly if I
44:48
wanted to, I don't want to say
44:50
flippantly, but I could say it's just
44:52
like, yeah, don't use Postgres, go, go
44:54
build your own thing. And that's easy
44:56
for me to say. But that's not
44:58
easy to actually do, because as soon
45:00
as you remove Postgres, now you have
45:02
to actually think about your data, and
45:04
you have to think about how do
45:06
you store it, how do you move
45:08
it around? Like, now you have to
45:11
actually be like, oh, I don't have
45:13
linearizability anymore. I don't have these constraints
45:15
that I had before. How does that
45:17
affect the rest of my system? And...
45:19
for a long time we've been able
45:21
to just defer those until like something
45:23
falls over and then we run around
45:25
and we find some other vendor that
45:27
produces something that they claim will solve
45:29
all of our problems and we use
45:31
that thing for a while and then
45:33
it doesn't and we just kind of
45:35
keep doing that thing. But if
45:38
you say from the beginning,
45:40
oh, we're not going to use Postgres,
45:42
or we're not going to use Kubernetes
45:44
then you actually have to go build
45:46
something and you have to be the
45:48
one that owns that thing and you
45:50
have to go find out the knowledge
45:52
and understand the thing. Like I think
45:54
a lot of the technologies that give
45:56
us complexity are the IBMs of
45:58
the modern day, right? No one ever
46:00
got fired for buying an IBM. Like
46:02
no one gets fired for suggesting Kubernetes
46:04
or for saying let's use Postgres or
46:07
for any of this stuff. But if
46:09
you say, let's go build my own
46:11
thing and it doesn't work, even if
46:13
it's not your fault that it didn't
46:15
work, even if it wasn't going to
46:17
work if you'd used Postgres or Kubernetes,
46:19
people are going to point the finger
46:21
at you and be like, oh, well,
46:23
it's your thing that broke. Because that's
46:25
the easiest thing to point the finger
46:27
to. I think there's that
47:05
ethos that has to get unwound, that
47:07
will take a long time, even if,
47:09
like if we could snap our fingers
47:11
tomorrow and have brilliant information organization systems,
47:13
we would still have that complexity because
47:15
that ethos is still there. That problem
47:17
kind of exists on all the different
47:19
levels too. Like we're talking about like
47:21
big pieces like databases, right? But imagine I
47:23
have a string and I just want
47:25
to replace all the zeros with O's,
47:27
right? I might grab a regular expression.
47:29
And imagine all that complexity behind compiling
47:32
and running that regular expression versus a for loop that goes through and just replaces the zeros, right?
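[Editor's note: a minimal Go sketch of the contrast described here, compiling and running a regular expression versus a plain loop to swap zeros for O's. The sample string is invented for illustration.]

package main

import (
    "fmt"
    "regexp"
    "strings"
)

func main() {
    s := "r00t 0f the pr0blem"

    // Heavyweight route: compile a pattern, then run the regex engine.
    re := regexp.MustCompile("0")
    fmt.Println(re.ReplaceAllString(s, "O"))

    // Boring route: walk the string yourself and swap each rune.
    out := []rune(s)
    for i, r := range out {
        if r == '0' {
            out[i] = 'O'
        }
    }
    fmt.Println(string(out))

    // Or the standard-library one-liner, no regex engine involved.
    fmt.Println(strings.ReplaceAll(s, "0", "O"))
}

All three print "rOOt Of the prOblem"; only the first drags a pattern compiler along for the ride.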
47:38
That pattern is something we repeat over and over everywhere, from the database down to string manipulation, right?
47:42
Yeah, when we switched to the era of open source, we got wonderful things, we got all this open source, but we also got the, oh, there's just a library I can go use to do this thing, I don't have to go do it myself, I don't have to understand how this thing works myself, I don't have to deal with that, I'm just gonna go grab this thing. And I think that brings in a lot of complexity. It does,
48:09
it does. I think you're damned if
48:11
you do, damned if you don't. If
48:13
you, if you choose the technology off
48:15
the shelf, that's safe, then you could fall into the traps of, this technology doesn't fully work for
48:21
me, and I need to be more
48:23
custom, blah, blah, blah. But if you
48:25
build your own custom thing, then like
48:28
you said, when things go wrong, you're
48:30
going to be blamed. And even if
48:32
you did choose the technology off the
48:34
shelf, then people are going to be
48:36
like, oh, why can't you build your own custom thing? But speaking of the information organization stuff and, like, the information organizer sort of role,
48:48
one, people don't even organize their own
48:50
information, right? Most people nowadays, they just
48:52
dump flat files in Google Drive somewhere
48:55
and they just rely on the search
48:57
feature or same thing with their phones.
48:59
They don't even organize the apps on
49:01
their phone. They just install them all,
49:03
let them just be installed and just
49:05
search for what they need at the
49:07
given time. And it's like, I think
49:09
that's part of the problem. We're not
49:11
even organizing our own information for our
49:13
simple cases. How can we expect to
49:15
organize our own information for the more
49:17
complex cases? And this brings me to
49:19
the question. Do we believe that something
49:21
like AI can be the information organizer?
49:24
Can it be the librarian that we
49:26
all need? Do we believe that? Like
49:28
the problem with AI, right? We all
49:30
like to think about AI as this
49:32
thing that might be able to think
49:34
on its own, but so much of
49:36
like what makes, especially when we talk
49:38
about information organization, what makes a good
49:40
information organization system depends almost entirely on
49:42
the humans that will be both generating
49:44
and consuming the information that is organized.
49:46
And you have to understand all of
49:48
that context, and what is best for a situation will change depending on the
49:53
humans involved. So it's like AI, I
49:55
mean, unbundling that term, right? There are
49:57
likely some machine learning algorithms and things
49:59
like that that will be helpful in
50:01
figuring out how to organize this information.
50:03
But there are things that computers
50:05
are just inherently bad at, and
50:08
that even when they try to
50:10
do pattern recognition, it's not going
50:12
to fit well. For instance, one
50:14
of the things that happened when
50:17
we went from having card catalog
50:19
systems and libraries to having web
50:21
catalog systems is they had to
50:23
basically dumb down the algorithms that
50:26
are used for cataloging. Because in
50:28
card catalogs, you have humans that can
50:30
make decisions about, oh, do these things
50:32
fit together based on another human's reasonable
50:35
understanding, right? Like, if you think, if
50:37
you talk about something like, oh, we
50:39
have a bunch of books on Henry
50:41
VIII, but some of them are, you know, Henry and then T-H-E E-I-G-H-T-H spelled out, and some of them are Henry 8th with a digit, and some of them are Henry V-I-I-I with Roman numerals, and in a card
50:50
catalog you'd want to have all of
50:53
those things next to each other because
50:55
they're all Henry VIII. But in a
50:57
web search, there's no easily designed, like, computer algorithm that will put all
51:02
of those things next to each other.
51:04
You have to custom code that. And
51:06
you have to custom code that for
51:08
every single context where that sort of
51:11
thing might happen.
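[Editor's note: a tiny Go illustration of the "you have to custom code that" point. This is not any real catalog system's algorithm, just a hand-built authority table of the kind a cataloger maintains: each known variant maps to one canonical heading, and a human decided every entry.]

package main

import (
    "fmt"
    "strings"
)

// canonical maps each variant spelling a human has already cataloged
// to the single heading they should all file under.
var canonical = map[string]string{
    "henry the eighth": "Henry VIII",
    "henry 8th":        "Henry VIII",
    "henry viii":       "Henry VIII",
}

func heading(title string) string {
    if h, ok := canonical[strings.ToLower(title)]; ok {
        return h
    }
    return title // unknown variants stay as-is until a human catalogs them
}

func main() {
    for _, t := range []string{"Henry the Eighth", "HENRY 8TH", "Henry VIII", "Henry V"} {
        fmt.Printf("%s -> %s\n", t, heading(t))
    }
}

Note that "Henry V" falls through untouched: no general-purpose pattern matcher can know, on its own, that it names a different king.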
51:13
So it's like, yes, we might associate these two things together,
51:15
but it might not necessarily be like
51:17
a logical thing, or it may take
51:19
a lot of extra context. And the
51:22
only way an AI is going to
51:24
figure that out is if it's trained on all of those things, or if
51:26
you can somehow connect the AI into
51:28
the way that people think and the
51:30
way that people communicate and have
51:33
it be able to be like,
51:35
oh, okay, given the current context
51:37
of this local group of people,
51:40
they're organizing things in this way,
51:42
so I should go organize things
51:44
in that way. Like, we conceivably,
51:47
like, maybe it could be possible
51:49
to build something that could do
51:51
this. Maybe, I'm not going to say
51:53
never, but whether it would be worth
51:56
it or cost effective is just an
51:58
absolute no. And we
52:00
don't have any AI technology that would
52:02
even be able to approach doing this
52:04
at the moment. So it's like AI
52:06
is not the solution here, but also
52:08
like just economically AI is a silly
52:10
solution to this problem, right? Like the
52:13
solution is, like, A, get people some, you know, schooling,
52:17
start teaching people about information organization. Like we used to. Before we had ubiquitous computers, you had to learn it, right? If you had to go write a research paper, what did you have to do? You had to go to the library, you had to talk to the reference librarian, and then they would help you find the information that you needed by searching the card catalog, learning the call number system, learning where the, you know, subjects you might need to look for are in the library. You go in the stacks, start picking
52:47
up books, looking through things, right? You
52:49
learned, okay, if I'm looking for this
52:51
subject, I can go to this part
52:53
of the stacks and pull books. And
52:55
if there's a book on this subject,
52:57
it'll be in this part of the
53:00
stack, right? And so, like, that information,
53:02
like, that's not a thing that gets
53:04
taught to people anymore. You barely learn that
53:06
in college, but the more that we
53:08
get digitized, the more that people don't
53:10
interact with, like, reference librarians as
53:12
much. And since we don't have them
53:14
in our companies because every tech company
53:17
refuses to hire librarians for some reason,
53:19
you don't get exposed to them in
53:21
tech companies either. So I think if
53:23
we just exposed people to the information
53:25
science folks, they would learn how to
53:27
not just find organized information, but you'd
53:29
kind of get through osmosis how to
53:32
organize your own information as well. I
53:34
do think it's a problem of people
53:36
are not exposed to any of this
53:38
throughout most of their lives, because we
53:40
just have stripped it away. Right, not a lot of people spend time in libraries actually doing research anymore. Well, it's worse. It's not even that it's stripped away from us. It's that... The example
53:51
you mentioned about going to the library
53:53
and looking up, it's intentional, right? You're
53:55
intentionally seeking out information that's relevant to
53:57
what you're looking for. Whereas, and you
53:59
can argue and say, hey Matt, no,
54:01
I could just Google search something, right,
54:03
or I could search something, and it
54:06
worked. But you're not intentionally searching anything, really. You're just letting Google tell you
54:10
what to click on. You know, yeah,
54:12
sure, you have the option of clicking
54:14
on the link or not, but odds
54:16
are you're not going to look through
54:18
the links or look through the results
54:21
and find ones that are relevant, like
54:23
kind of cherry-pick them a little bit.
54:25
You're going to choose the top three
54:27
links.
54:53
Which one am I going to dive
54:55
into about this? Nowadays we don't even
54:57
ask ourselves that question. We just read
54:59
a title of a headline somewhere and
55:01
we know everything we need to know.
55:03
I think the concept of search being
55:05
not something you have to type into
55:07
like a bar, like a text box, is foreign to most people. Right, if
55:12
you talk about like even simple things
55:14
like wikis, right? A good wiki, if
55:16
it's well-designed and well-laid out, you don't
55:18
need that search bar, right? Because, oh,
55:20
it's like you have a good call
55:22
system. So you're like, oh, if this
55:25
team has written documentation about this system,
55:27
I know that it will be in
55:29
this place. I don't need to go
55:31
type what I think they might have
55:33
titled it. I know where that information will be.
55:35
If that information is not there, then
55:37
they hadn't written it and I can
55:39
go talk to them and maybe we
55:42
can generate it and we can go
55:44
from there. It's not me being like,
55:46
oh, I didn't craft the right magical
55:48
incantation into this box to spit back
55:50
out at me the things because, you
55:52
know, if... If search is you writing
55:54
incantations into a box, you have to
55:57
get really good at guessing what other
55:59
people have conceptualized as the words to
56:01
use to convey this information. Whereas if
56:03
you predesigned a system that says if
56:05
this information is available, it will be
56:07
here, there's no, there's a lot less
56:09
ambiguity. You would just go and you
56:11
look at the spot and if it's
56:14
there, good, if it's not, you can
56:16
go talk to somebody.
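[Editor's note: a small Go sketch of the predictable-location idea. The path convention, teams and systems under a wiki root, is invented for illustration; the point is that the location is derivable up front, so nobody has to guess search terms.]

package main

import "fmt"

// docPath derives where a team's documentation for a system must live.
// If nothing exists at this path, the doc was simply never written.
func docPath(team, system string) string {
    return fmt.Sprintf("wiki/teams/%s/systems/%s.md", team, system)
}

func main() {
    fmt.Println(docPath("payments", "billing")) // wiki/teams/payments/systems/billing.md
}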
56:18
Do you not think, though, that an LLM could really help
56:20
with... the initial like generation though. I
56:22
know that's slightly different than what we're
56:24
talking about but I for example like
56:26
I have done this my team has
56:29
done this where they put a load
56:31
of code into yes a text box
56:33
or they provide the code file but
56:35
then it says write a summary of
56:37
what this does and more often than
56:39
not it's pretty close to accurate slash
56:41
like over time I'm sure it will
56:43
get better because I think the real
56:46
excitement in my mind when it comes
56:48
to information organization and I actively use
56:50
this in my job is you have
56:52
a load of information or a load
56:54
of historical documentation code whatever it may
56:56
be I've done it with Excel sheets
56:58
where I will say summarize this or
57:00
tell me what the pattern is in
57:03
this, or tell me how many times
57:05
X kind of pattern is used. And
57:07
it's been for my use cases, of
57:09
course, depending on the complexity, it might
57:11
be better or worse, also depending on
57:13
which LLM I'm using. But I found
57:15
that hugely useful and really, really helpful
57:18
for summarizing and providing information that is
57:20
all comprehensible and means that I don't
57:22
have to go and read like 50
57:24
documents. Obviously you have to build confidence
57:26
in the fact that it's going to give
57:28
you an accurate answer. But that's what
57:30
gives me hope that there is, I
57:32
agree not now, like I still need
57:35
to go through and spot check. I
57:37
don't have absolute confidence in these toolings
57:39
yet, but I do hope to get
57:41
to a place where I would be
57:43
able to do that with a hundred
57:45
percent confidence that it will be accurate
57:47
enough for my purposes. I'm interested to
57:50
hear what the group thinks.
57:52
I also know, Ian, when Chris originally talked about this, I saw you go, oh, I think that could actually be really helpful. I'd be interested to hear. Matt
58:02
talked about how everyone is just storing
58:04
everything as like flat files in a
58:07
Google Drive, right? Which I do think
58:09
is a problem, the discoverability is a
58:11
problem, but I do think, maybe not
58:13
yet, but at some point AI will
58:15
be able to go and make like
58:17
a clean wiki that you can navigate
58:19
like Chris mentioned by just looking at
58:22
those flat files, right? Like as a
58:24
presentation layer on top of our data
58:26
right and maybe not just like a
58:28
search or a chat like that but
58:30
actually building out and keeping documents up
58:32
to date I really believe that's something
58:34
that like large language models will be
58:36
able to do assuming they don't plateau
58:39
I think maybe part of the reason
58:41
why I don't I don't believe that
58:43
as much is likely rooted in like
58:45
the reason why like a bunch of
58:47
flat files isn't that helpful is because
58:49
they haven't been put under you know
58:51
what is called bibliographic control. And like
58:54
what bibliographic control means is that you've
58:56
taken metadata and then you've specifically designed
58:58
that metadata so it can be catalogued
59:00
which allows you to search it. And
59:02
while AI is like great at things
59:04
that are well understood and well known
59:06
at the moment. It's not as good
59:08
at necessarily understanding context for things that
59:11
are new, or for like newly generated
59:13
information, or ways that language changes, right?
59:15
So it has to keep up with
59:17
what current language is. But I also
59:19
think that metadata generation is not actually
59:21
that difficult of a thing to do.
59:23
It's just something that you have to
59:26
be trained on how to do it,
59:28
right? And once again, I think if we boil it down, if we reduce from AI to like just regular algorithms and maybe like some ML stuff, I think absolutely, right? I think that if you
59:40
have a bunch of files, but they're
59:43
all in a structured format where like
59:45
here's the metadata, right? I've gone in
59:47
and I've written out like here's the
59:49
title, here's some keywords, here's blah blah
59:51
blah, like you, you've structured it. I
59:53
think that the system can go through
59:55
and just organize it for you, right?
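[Editor's note: a minimal Go sketch of the division of labor proposed here: a human writes the metadata, and the mechanical part, grouping and sorting by it, needs no AI at all. The titles and keywords are invented for illustration.]

package main

import (
    "fmt"
    "sort"
)

// Doc stands in for a file a human has already described with metadata.
type Doc struct {
    Title    string
    Keywords []string
}

func main() {
    docs := []Doc{
        {Title: "Q3 incident review", Keywords: []string{"postmortem", "database"}},
        {Title: "Schema migration guide", Keywords: []string{"database", "how-to"}},
        {Title: "On-call handbook", Keywords: []string{"how-to"}},
    }

    // Build a keyword index: plain algorithms, not AI.
    index := map[string][]string{}
    for _, d := range docs {
        for _, k := range d.Keywords {
            index[k] = append(index[k], d.Title)
        }
    }

    // Print the catalog in a stable order.
    keys := make([]string, 0, len(index))
    for k := range index {
        keys = append(keys, k)
    }
    sort.Strings(keys)
    for _, k := range keys {
        fmt.Println(k+":", index[k])
    }
}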
59:57
I think that that's the part it
1:00:00
can do well, but that initial step
1:00:02
of generating that metadata needs to be
1:00:04
something that a human does because it's
1:00:06
like a human that's going to be
1:00:08
using and consuming it at the end
1:00:10
of the day. But the way a
1:00:12
machine might organize something is not necessarily
1:00:15
the most intuitive way that an individual
1:00:17
would. This is especially true of like
1:00:19
local, like for you or for a
1:00:21
small group of people. So I think
1:00:23
that there is this place for technology,
1:00:25
especially computing, to help us here. I
1:00:27
just don't think it's AI. I think
1:00:29
AI is just, I think it's a
1:00:32
lower level thing mixed with understanding of
1:00:34
information organization theory and science. I have
1:00:36
a follow-up question. If we get really
1:00:38
good at explaining what we want to
1:00:40
do, do you think that we could
1:00:42
solve the writing of code? Because I
1:00:44
think if you can really really accurately
1:00:47
articulate, not now, but in the future
1:00:49
in some future state, exactly what you
1:00:51
need, could we take away a lot
1:00:53
of the complexity that flows through a
1:00:55
conversation somewhat rooted in human decision making?
1:00:57
Am I going to do it this
1:00:59
way? Am I not? Am I going
1:01:01
to make this trade-off? This other trade-off?
1:01:04
If we actually focus... Is that now
1:01:06
what code is? Is code not just
1:01:08
very precisely explaining what you want something
1:01:10
to do? I think a lot of
1:01:12
the thing, a lot of what I'm
1:01:14
hearing is a lot of the complexity
1:01:16
that is introduced is by people who
1:01:19
perhaps don't understand fully what is happening
1:01:21
in the code, in the technology at
1:01:23
the base layer. If we therefore take
1:01:25
that away and take away the need
1:01:27
to do that and get very good
1:01:29
at making sure that not a human but, you know, some algorithm, some LLM, understands that logic. And you take
1:01:36
the time, you absolutely need to take
1:01:38
the time to teach it, but if
1:01:40
you get it to a good enough
1:01:42
level, could there be certain complexity that
1:01:44
is eliminated by doing this? Which I
1:01:46
do acknowledge is leaning back into the
1:01:48
first problem we talked about, which is
1:01:51
around using tools that we think work
1:01:53
and might not, but I want to
1:01:55
put it out there. I'm just speaking
1:01:57
aloud. I do think it might be
1:01:59
able to give us more objective advice, like on what things we should or shouldn't be using. Maybe not though, I
1:02:05
don't know I think it's gonna always
1:02:08
miss the why context.
1:02:12
Like if you tell something what to
1:02:14
do or how to do it, that's
1:02:16
great, but you still need to know
1:02:18
the why. And I think that's where
1:02:20
we're getting at with the metadata aspect: the metadata gives you more of
1:02:25
that context surrounding why we were even
1:02:27
here, why we're even doing this thing
1:02:29
in the first place, or why does
1:02:31
it matter in relation to other things?
1:02:33
And I think if we had some
1:02:35
future where you can just tell the
1:02:37
AI exactly what we want to do,
1:02:40
great, it's going to do it. I'm
1:02:42
trying to think of a good example,
1:02:44
but in terms of like physical, in
1:02:46
the physical world, of like building a
1:02:48
house or something or building or doing
1:02:50
renovations, if someone looks at you and
1:02:52
says like, oh, I want you to
1:02:54
build me a house strictly out of
1:02:57
wood with nothing else, but you're in
1:02:59
the middle of like, you know, some
1:03:01
place that has no wood, you might
1:03:03
question yourself, like, why are we doing this? The why is important in my
1:03:07
opinion. Like, and if AI can't understand
1:03:09
the why, can we? Like, could you
1:03:12
not, like, your example, like, build me
1:03:14
a house completely out of wood that is in Siberia, in a forest-fire-susceptible area? Like, surely
1:03:20
if we've trained it, whatever we're putting
1:03:22
this into in the right way, it
1:03:24
would come back and say, I could
1:03:26
do it, but I would recommend you
1:03:29
don't. The reason, and this is why.
1:03:31
Because I do that, I go in
1:03:33
and maybe I'm outing myself here and
1:03:35
I go like, write me an email
1:03:37
to someone who I'm really upset at
1:03:39
because they said no to me being
1:03:41
able to work on this project I
1:03:44
really want to work on. However, they're
1:03:46
the head of this department and this
1:03:48
is their general sentiment. So write the
1:03:50
email with that in mind, blah blah
1:03:52
blah blah blah. And then it comes
1:03:54
back even now and says, oh, here's the email, this is why I changed this word to this. Already we're seeing
1:04:01
some reasoning. Yeah, don't call them a
1:04:03
jerk, yes. No, that makes sense. Like
1:04:05
I would advise that you don't use
1:04:07
the word X, Y, Z, because as
1:04:09
you've let me know, you want this
1:04:11
to be polite and this is not
1:04:13
a polite word. I would recommend you
1:04:16
say this instead. I think to Chris's
1:04:18
point earlier, like some human is still doing it. You are the human doing
1:04:22
the information organizing, right? You are up
1:04:24
front organizing the information in a way
1:04:26
where it's helpful to whatever is going
1:04:28
to consume it and retrieve it. You're
1:04:30
giving the AI all that context up
1:04:33
front and you're saying, hey, do this.
1:04:35
So if AI can do that reliably,
1:04:37
sure, I think it could be helpful.
1:04:39
Are we there today? Probably not. Maybe
1:04:41
in the contrived examples that we're using,
1:04:43
sure. But when we're talking about complex
1:04:45
systems that relate to one another, I
1:04:48
think that's where it's going to fall
1:04:50
short, at least today, with today's AI. And mind you, I'm sorry
1:04:54
I didn't have a better example. Like,
1:04:56
I don't know, I'm struggling to, I've
1:04:58
been up for a long time. Okay,
1:05:00
my brain is... I've been convinced. I
1:05:02
kind of agree with you, I don't
1:05:05
know. Just like going back to the
1:05:07
complexity that we get by using, like, things other people built, a lot of the content is around that, right? Like, it would require,
1:05:15
like... Because AI is a reflection of
1:05:17
humans, right? So we have to get
1:05:20
it right before the AI can get
1:05:22
it right. Yeah. AI is reflective of
1:05:24
its training data. And it'll always do
1:05:26
the average thing. So I mean, that's
1:05:28
something I like to say a lot
1:05:30
of like, I don't know, yeah, you
1:05:32
can use an AI coding system if
1:05:34
you want the most average code, but
1:05:37
I thought we were all here trying
1:05:39
to get the best code. But if you want to go write and have average code, you know, mediocre code, more power to you. There's a
1:05:47
trap we've all fallen into around what
1:05:49
computers are good at, what humans are
1:05:51
good at, right? There's like the two
1:05:54
types of problems, right, or the two
1:05:56
types of learning. There's the easy learning
1:05:58
and the hard learning. The easy learning is the stuff that AI systems and computers are really good at, because those
1:06:04
are the things that are very repetitive.
1:06:06
You do it over and over and
1:06:09
over and over, you can learn from
1:06:11
it. So games like chess, or really
1:06:13
any game, is a good example of
1:06:15
this. You can play it many, many,
1:06:17
many, many different times, and you can
1:06:19
learn from each time you play, how
1:06:21
to get better. And then there are
1:06:23
hard problems, which you don't get
1:06:26
to do many times, and therefore there's not
1:06:28
a lot of data about them. There's probably
1:06:30
not a lot of information about this. So
1:06:32
anyway, it's probably not going to give you
1:06:35
good advice about how to build a skyscraper
1:06:37
in Manhattan, but it might have great advice
1:06:39
about how to build a single family home
1:06:41
because we built so many of them. People
1:06:43
have written a lot of content about them.
1:06:46
So for things that there's a lot of
1:06:48
information about or for things that are those
1:06:50
easy learning problems, the AI systems and
1:06:52
computers are great at those things. But I think
1:06:54
there's a second part of this as well, which
1:06:57
is. Humans are really good at
1:06:59
generating things. We're really good at
1:07:01
creating and, you know, for
1:07:03
instance, we're really good at
1:07:05
doing that metadata generation. Like we
1:07:07
can read something and say, oh,
1:07:10
here's some interesting words and, oh,
1:07:12
this thing is written in a way where I think this keyword
1:07:23
also fits here, like getting into
1:07:25
all that subtlety, right? The human generates, the machine checks
1:07:27
the work. Like that's the good
1:07:29
workflow. And what AI and a
1:07:31
lot of the AI hype tries
1:07:34
to get us to do is
1:07:36
reverse those two things. And I
1:07:38
think when you reverse those two
1:07:40
things, now you're having humans try
1:07:42
and check the AI. We're terrible
1:07:44
at checking things. That's not what
1:07:47
we're good at. We are not good at that type of thing. We are not diligent enough. We get bored. We get distracted. You
1:08:00
do a little too much creativity, you wind
1:08:02
up hallucinating, so you want to get close
1:08:04
to that, but the best type of creativity
1:08:07
is the thing that sits like right on
1:08:09
that line between like insanity and... Yeah, between
1:08:11
being insane and sane, right? You got to
1:08:13
sit right on that line. And like, the
1:08:16
AI has no idea how to sit on
1:08:18
that line properly, so it'll just... barf out stuff
1:08:20
at you kind of like you know I
1:08:22
mentioned in the other episode how I got
1:08:25
the tac command because it's the inverse of
1:08:27
cat and I'm like you know that works
1:08:29
for some things but not for this
1:08:31
thing. Very creative but very wrong in this circumstance. A for effort, but I think I for idiot. No, I'm just kidding. But I think this is one of
1:08:42
the things about information organization that
1:08:44
would help us because I think there
1:08:46
is like a lot of writing about
1:08:48
these ideas, and so many people have thought about this and have written things
1:08:54
about this but you have to go
1:08:56
search to find you know books and
1:08:58
references and talks and videos on this
1:09:00
stuff and our algorithm systems kind of
1:09:02
just do what everybody else thinks is
1:09:05
good, it kind of recommends popular
1:09:07
things to you. So it's the
1:09:09
same problem that we've had with,
1:09:12
you know, all of the other
1:09:14
things we've been talking about. It's
1:09:16
like, yes, you just get recommended
1:09:19
stuff that reinforces what you've already
1:09:21
learned about, which is saying that, like,
1:09:23
oh, no, AIs can do all these things, or AI in the future will be able to do all these amazing things, not
1:09:30
that we should. I mean, like, Is this
1:09:32
something that we even need to, is this
1:09:34
a hard thing to do? Is this a
1:09:36
thing worth automating? I think is the big
1:09:38
question at the end of the day. I
1:09:41
think when it comes to information organization and
1:09:43
generating that metadata and all of this, like,
1:09:45
no, like it's valuable, like in your case
1:09:47
Angelica, where you said, oh, we have all
1:09:49
this old code, all this historical information,
1:09:52
it's super valuable to maybe hire an
1:09:54
archivist and have engineers sit down and
1:09:56
just read and process through all of
1:09:58
that data, because their knowledge is deepened by that, and then you'll have this really
1:10:02
high quality set of information that you
1:10:05
could then feed into like a machine
1:10:07
learning thing to extract things out eventually
1:10:09
or write some algorithms that help you
1:10:12
out with things instead of just throwing
1:10:14
the information at the algorithm to begin
1:10:16
with, or throwing it at a general algorithm,
1:10:18
to begin with. So I think like
1:10:21
optimizing more for those early stages is
1:10:23
a good idea and having humans do
1:10:25
it. I think this whole conversation about
1:10:27
AI reducing complexity is, like, indicative of
1:10:30
why we have so much complexity. It's
1:10:32
like, let's do this really complex thing
1:10:34
to make this easier. Yeah. And it'll
1:10:36
just make it more complex. I mean,
1:10:39
I think the AI hype cycle that
1:10:41
we're in is indicative of how
1:10:43
complex things have really gotten for us.
1:10:46
Because when you actually do the math
1:10:48
and run AI systems, right? LLMs, they cost what, like, 200, 400, 600 million
1:10:52
dollars to train one, like exorbitantly expensive
1:10:55
to train one of these things. And
1:10:57
that's like, hey, how are you ever
1:10:59
going to recoup those costs? But it's
1:11:01
like, why are we spending hundreds of
1:11:04
millions of dollars to build these things
1:11:06
and we don't want to turn around
1:11:08
and hire like a librarian for 75K
1:11:11
a year to help us organize our
1:11:13
information? Like, why is it that we
1:11:15
are so willing to go dump that much money into that thing instead of doing the thing that works? Like, why are we so obsessed with the
1:11:24
shiny and I think that's another reason
1:11:26
why we have so much complexity is
1:11:29
that we like the shiny. We don't want the boring thing, the simple answer, the, oh, just go do the work. Like when
1:11:38
I say information organization, like yeah this
1:11:40
is a whole industry. People get PhDs
1:11:42
in this. You can go to school
1:11:45
to become a librarian or an information
1:11:47
scientist or an archivist. And then you
1:11:49
can go work and like we know
1:11:51
how to do it. You go in,
1:11:54
you do the work, you catalog the
1:11:56
information and boom. You have all this
1:11:58
nice organized information, you can find everything. People don't realize how much of the world was organized for them and how much
1:12:03
value they're getting out of it, to
1:12:05
know that they should do it themselves
1:12:07
as well, especially companies. You know, like
1:12:10
encyclopedias, phone books, things that traditionally were
1:12:12
structured information. I mean, heck, even observability
1:12:14
platforms pushed for structured information. What else
1:12:16
is structured information except for organizing information?
1:12:19
And it brings me into something that
1:12:21
I want to talk about with complexity. Part of why I think complexity is going up and to the right in
1:12:28
this industry is that we have sort
1:12:30
of mixed and unclear terminology of how
1:12:32
we talk about anything in this field.
1:12:35
There's seven different words to describe the
1:12:37
same thing. No two parties are using
1:12:39
the same vocabulary. Things are not clear
1:12:41
or words are even used that don't
1:12:44
mean what they should mean and we're
1:12:46
just very unclear with our terminology in
1:12:48
this industry and I think that leads
1:12:51
to a lot of complexity. And to
1:12:53
give a quick example of Kubernetes, it's
1:12:55
like, if you're a new person learning
1:12:57
Kubernetes, you are just thrown into the
1:13:00
world of new terminology, right: pod, StatefulSet, DaemonSet, operator, controller. Oh, they're
1:13:04
the same thing, no, they're not really,
1:13:06
but they are the same thing actually
1:13:09
under the hood. And it's like, why
1:13:11
did we do this to ourselves? We
1:13:13
don't need a new word for every
1:13:15
little thing. It's okay to just use plain language to give it a name. But we have to give it a word that's so cool, and it's not descriptive.
1:13:25
And it's like you're just hurting people.
1:13:27
Like that's all you're doing is hurting
1:13:29
people. That's another thing that seems simple
1:13:31
at the start and adds complexity. Like
1:13:34
it seems simpler to call it something,
1:13:36
right? Yeah. But. I mean, most of
1:13:38
the time people don't want to sit
1:13:40
down and pull up an etymology reference
1:13:43
and be like, okay, what is this
1:13:45
thing? How does it relate to words that already exist? How do I break down and understand what
1:13:52
words mean? People are like, oh, slap a name on it. We'll figure this
1:13:56
out later. Like we love doing that
1:13:59
as an industry. I mean, this is
1:14:01
like my whole thing about how asynchronous
1:14:03
doesn't mean anything anymore and like synchronous
1:14:05
and asynchronous and concurrent, like these words etymologically do not map correctly
1:14:10
onto what we have and we've just
1:14:12
made a giant mess out of just
1:14:15
kind of deciding that we're gonna throw them around in whatever way, because we're not very precise with our communication. Because, well, quite frankly, most people
1:14:24
that learn how to code or most
1:14:26
people that like get CS degrees or
1:14:28
whatever, they don't take many English courses.
1:14:30
So they don't really learn about, you
1:14:33
know, how does etymology work or how
1:14:35
do you go find the meaning of
1:14:37
words or how do you create and
1:14:40
invent new words. And then we're stuck
1:14:42
with cloud and devops. So I do
1:14:44
want to kind of bring this back
1:14:46
in. We had a bit of an
1:14:49
AI tangent, related enough. But I'm hoping we can kind of formulate a concise
1:14:53
hypothesis of where complexity comes from. And
1:14:55
does anyone want to take a stab
1:14:58
at it? Like if you had to
1:15:00
do it in one sentence. Here's my one sentence to describe where complexity comes from.
1:15:07
Complexity comes from users not describing accurately
1:15:09
what it is they're doing with the
1:15:11
tools they're working with. That's where complexity
1:15:14
comes from. If I have to put
1:15:16
one sentence on it. Anyone else? I
1:15:19
mean, I feel like I already said the root, but in just one sentence: complexity comes
1:15:27
from a lack of information
1:15:30
organization. Like, I feel like over the course of this episode that has become very reinforced in my
1:15:38
mind of like all of
1:15:40
the problems that we have
1:15:43
talked about could be maybe
1:15:45
not solved completely but definitely
1:15:47
mitigated if we had better
1:15:49
information organization across the board
1:15:51
like if we had the
1:15:53
ability to communicate in a
1:15:56
way that would allow us
1:15:58
to you know build on
1:16:00
previous communication, I think we
1:16:02
would be in a much,
1:16:04
much better place than we
1:16:06
are. You got one, Angelica?
1:16:09
I'm trying to think. I'll
1:16:11
give you the beginning of
1:16:13
my sentence, and it's the ending I'm struggling with. So
1:16:17
complexity is caused by the
1:16:19
lack of understanding, time, and
1:16:22
the need for the new,
1:16:24
but... I
1:16:26
don't think the last part is
1:16:28
fully accurate because I think we've
1:16:30
talked both about the root of
1:16:32
complexity being a lack of exploration
1:16:35
of the different options, but also
1:16:37
the exploration of the different options
1:16:39
and not being happy with what's
1:16:41
already available. So that ending part
1:16:43
I think is a little bit
1:16:45
of a dichotomy because I think it's
1:16:48
actually both. So it's a lack of time and understanding, and both the
1:16:52
need for the new and... the
1:16:54
unwillingness to explore the new. It's
1:16:56
like a seesaw. That's what I
1:16:58
picture when you say this, yeah.
1:17:01
I think complexity comes from attempting
1:17:03
to simplify with tools that just
1:17:05
move the complexity and add more
1:17:07
behind a facade of simplicity. Yeah.
1:17:09
Yes. Complexity is Kubernetes. Complexity is
1:17:11
Kubernetes. That's it. This is something
1:17:14
I want to, I want to
1:17:16
explore further, but I think this
1:17:18
was a really good, like, initial
1:17:20
exploration. It gives us a lot
1:17:22
to think about, and yeah, stay
1:17:25
tuned for, for more about complexity,
1:17:27
because it's definitely something I am
1:17:29
very interested in exploring further. It's a very complex issue. It really is.
1:17:33
I'd love to hear what our listeners think about complexity too. Yeah, listener, where do you think complexity comes from? Yeah. Comment down below
1:17:42
please. Send us a letter by
1:17:44
mail. Technically you could but please
1:17:46
don't. I mean no like technically
1:17:48
there is a PO box you
1:17:51
could send a letter to if
1:17:53
you want to but you know
1:17:55
I don't know why I said
1:17:57
that because somebody's gonna do it
1:17:59
you know, I'm sorry, I shouldn't have said that at all.
1:18:04
If you I mean hey if
1:18:06
you can figure out what that
1:18:08
PO box is I mean it's
1:18:10
not that hard. It's in a... If you look at
1:18:17
enough things on our website and
1:18:19
trace enough trails you will find
1:18:21
a PO box. I mean like
1:18:23
legally I have to have a
1:18:25
PO box so it's not like
1:18:27
a thing that's not supposed... It's
1:18:30
public. It's public. You just got
1:18:32
to go find it. Should we
1:18:34
wrap things up? Does anyone want to do it? Because now I don't
1:18:38
know what you did. Now I
1:18:41
want a letter. I want a
1:18:43
letter now. Should we wrap things
1:18:45
up and do unpopular opinions? Unpopular
1:18:47
opinions. Yes. That was it. That
1:18:49
was yes. You can't say that
1:18:51
without the old jingle going through
1:18:54
my head. 100%. It's in my brain.
1:18:56
Yeah, I mean, who has one?
1:18:58
I have one. I wrote one
1:19:00
down. Okay. Go for it. Okay,
1:19:02
cool. My unpopular opinion is that
1:19:04
LinkedIn has the best like experience
1:19:07
out of all social media platforms
1:19:09
ever. Bar none. Don't even want
1:19:11
rebuttals. None of that. But what
1:19:13
do you mean? The like button?
1:19:15
Yes. The best use of the
1:19:17
like button, the best meaning of
1:19:20
the like button. In fact, it's
1:19:22
the only platform where I even
1:19:24
interact with the like button. There's
1:19:26
no, I don't even like things
1:19:28
on X or Blue Sky or
1:19:30
anything, you can go check my
1:19:33
profile, but LinkedIn, I use the
1:19:35
like button. But why? Okay, yeah, why? It's semantic, it's meaningful. It can be a like, it can be a celebrate, it can be a support or a love. It means multiple things
1:19:48
rather than just like. Because like
1:19:50
when you use like on other
1:19:52
social media platforms, what do you
1:19:54
mean? Do you like what I
1:19:57
said? Do you agree with it?
1:19:59
Do you just like that post
1:20:01
as a bookmark for later? Are
1:20:03
you just trying to have engagement?
1:20:05
It means nothing. So LinkedIn has
1:20:07
the best experience for that. Yeah,
1:20:10
I do remember back when Twitter
1:20:12
had public likes. Like I would
1:20:14
like something and be like, I
1:20:16
don't agree with that, but it
1:20:18
was like witty and funny. Is
1:20:20
this saying I agree with it?
1:20:23
Yeah. I'm not overtly outraged by that
1:20:25
statement, but nor do I feel
1:20:27
like I fully agree. I kind
1:20:29
of need to sit with it
1:20:31
for a bit, because my honest
1:20:33
answer is, not to discredit your
1:20:36
unpopular opinion, but like, when I
1:20:38
like a picture on Facebook or
1:20:40
meta, I think that means I
1:20:42
liked it. Maybe you're not wrong.
1:20:44
Maybe I haven't thought about the
1:20:47
different connotations of my like. I
1:20:49
feel like maybe I just need
1:20:51
to think more deeply because what
1:20:53
I like on any of these
1:20:55
platforms it means I'm like, yes,
1:20:57
I liked it. Like is a
1:21:00
very complex topic. I now need
1:21:02
to look into the etymology of
1:21:04
the word like. The one thing
1:21:06
this is not my opinion one
1:21:08
thing I don't like and I
1:21:10
would be intrigued now to hear
1:21:13
what you think Matt is I
1:21:15
don't like that LinkedIn gives you
1:21:17
these like pre-baked auto replies because
1:21:19
I can tell if I get
1:21:21
well done, Angelica, and one
1:21:23
exclamation mark you didn't write that
1:21:26
you're lazy you just press the
1:21:28
freaking button to auto reply oh
1:21:30
yeah, I agree with that, those auto replies are garbage. You're
1:21:34
going to talk about, like on Facebook too, where you just get the happy birthday, one exclamation mark, like, what is it, the little party popper emoji. I'm like, you didn't write that. Yeah, I agree. You got a notification, you had no idea it was my birthday,
1:21:52
you might not even know who
1:21:54
I am, but you're like, what
1:21:56
is it? See, but if you
1:21:58
posted that you got a new
1:22:00
job or something or some sort
1:22:03
of thing, I would use the
1:22:05
celebrate version of the like, and
1:22:07
I would celebrate your success. Because
1:22:09
it's meaningful. That being said, I
1:22:11
posted on LinkedIn, I actually removed
1:22:13
it because there were so many
1:22:16
weird comments on it, but I
1:22:18
posted on, one of my friends
1:22:20
launched a new podcast, it's completely
1:22:22
unrelated to this, so it's not
1:22:24
cross-pollination, I promise, it's a completely unrelated topic, and I will not even
1:22:29
mention what the podcast is. But
1:22:31
anyway, but I posted about the
1:22:33
first episode, and I got, congratulations
1:22:35
on your new job. Congratulations on
1:22:37
the new opportunity, I was like,
1:22:39
no, you should read the
1:22:42
post! So now you
1:22:44
see why I removed it because
1:22:46
I was like clearly no you
1:22:48
didn't read this. That sums up
1:22:50
social media so well in general.
1:22:52
Yeah. Let me just show I
1:22:54
looked at it without actually reading
1:22:56
it or understanding what it says
1:22:59
or looks like or the connotations
1:23:01
of my like as you said
1:23:03
Ian like did I just inadvertently
1:23:05
agree with this without knowing? All
1:23:07
right I have an unpopular opinion.
1:23:09
Okay. I think read receipts on
1:23:11
texts and emails are the devil.
1:23:13
They're the worst thing that we've
1:23:16
ever invented. I don't want anyone
1:23:18
to know- You're ignoring people for
1:23:20
long periods of time. Don't you
1:23:22
have read receipts on for your
1:23:24
text? No. I immediately turn those
1:23:26
off. I just don't want to
1:23:28
know. I also don't want to
1:23:30
see that someone has read my
1:23:32
thing. Like, if they haven't replied,
1:23:35
I want to be like, you
1:23:37
know, like they haven't seen it
1:23:39
yet? Or they're busy, whatever. I
1:23:41
don't want to like stew in
1:23:43
that like, oh they saw it
1:23:45
and never replied. Like, I just
1:23:47
don't think it's helpful. That's like
1:23:49
presence notifications on different social media
1:23:52
sites. It's like, I don't know
1:23:54
you're online. That means you might
1:23:56
not have read my thing. Like,
1:23:58
no. But, but to your point,
1:24:00
Andrew, like. I am also habitually
1:24:02
bad at replying to people. So
1:24:04
this might be a, that might
1:24:06
be a biased opinion. Do you
1:24:09
think the world would be better
1:24:11
if you never knew if someone
1:24:13
received your message? You don't get
1:24:15
it for letters and we use
1:24:17
those forever, you know. But
1:24:19
if I send someone a message
1:24:21
like a letter and I perpetually
1:24:23
wonder whether they received my letter
1:24:26
I might just start sending a
1:24:28
letter every day until and then
1:24:30
they'll have like 50 letters, and I didn't actually... I was just a really slow writer. I'm just laughing at Chris. That is certified now as a very good unpopular opinion
1:24:40
because I think you'll divide the
1:24:43
crowd on that one. Yeah. Because
1:24:45
I love knowing when people have
1:24:47
read my messages but I also
1:24:49
hate it depending on the message.
1:24:51
I would like for marking unread to make it unread though. Like if I select unread, I want to, like, move the read receipt back to unread. Oh, but I've used that, I would do that. I've seen it live though, where I've seen them read it and I see them sneakily change it back to unread. And I'm like, I saw that, I saw that. Because I do that, I do that to manage my notifications and stuff. I leave things unread intentionally. I do that, I 100% do that. I have also, on Slack, you'll message someone
1:25:27
that's offline and then you'll see
1:25:29
them pop online for like 10
1:25:31
seconds and then go back offline
1:25:34
and not reply I don't know
1:25:36
I just assume all communication is asynchronous. Like, I don't know, I'm bad at replying to texts. I didn't even know you were bad at replying to texts. Exactly. I do
1:25:46
think the stewing is real though,
1:25:48
Ian. I see someone's read my
1:25:51
text like two days ago and
1:25:53
hasn't replied. I then have to
1:25:55
reread the text to be like,
1:25:57
did I say something wrong? You know, you used a LinkedIn feature to just copy and paste, anything. We need semantic reads. It's not read, it's, I saw the notification, I have not read it yet. I used to, like, send a voice message in that case sometimes, just something wacky that I would say. But now they have
1:26:26
transcripts for that. So they know what
1:26:28
you're saying. I feel like we should
1:26:30
just like lean into the passive aggressiveness and
1:26:32
like one of the replies you should be
1:26:34
able to do in like RCS or iMessage is just like, read. Like you should be able to just, like, mark a message as read, so that if you want to leave someone on read you can just do it intentionally. Like be passive aggressive, aggressively. Like
1:26:48
not accidentally. You know because sometimes like... It's
1:26:50
not passive aggressive anymore, it's just aggressive. Aggressively aggressive, I guess.
1:26:58
Aggressive would be, like, replying to anything with, I have read your message, period, sent.
1:27:06
That's aggressive. It's like the
1:27:09
auto reply. You can put some
1:27:11
people's auto replies are savage. I'm on holiday. Full stop. No, like, no "if you need questions answered" line, just, I'm on holiday, full stop. How do you even have the audacity of interrupting my vacation? Sometimes I quickly read
1:27:27
something to make sure it's like, oh, is this
1:27:29
an emergency? Like, okay, no, it's not, I'll reply to this later. But like, sometimes
1:27:33
things will be like, oh, now you've read it.
1:27:35
And I'm like, and I was just peeking to
1:27:37
make sure it's not like, you know, someone's dying
1:27:39
or something. Like. Very good point. Because like if
1:27:42
you, if you get lucky enough to see them
1:27:44
in notifications, then you can peek at them without it being marked as read. But if
1:27:48
like many messages have gone by and you
1:27:50
have to actually click into it, then you're
1:27:52
like, damn it, I marked it read, but
1:27:54
I don't want, yeah, I get you. I just
1:27:56
write, I'll reply to this later, upside-down smiley face. But then you're never
1:28:01
going to reply to it later. Yeah,
1:28:03
but then I turn it to unread.
1:28:05
Oh, okay. That's a lot of work.
1:28:07
But it's worth it to make people
1:28:10
think that I'm not being passive aggressive
1:28:12
to them. Fair enough. Maybe I should
1:28:14
adopt that policy. This is a great
1:28:16
unpopular opinion, Ian. It spurred a
1:28:18
lot of conversation. In fact, both of
1:28:21
them. Lots of conversations had. I did
1:28:23
in fact read Ian during this, Ian,
1:28:25
during this too. So we got to
1:28:27
use both versions of read. It's good. Anyone have any others? Oh, go for it. Of course Chris does. Of course I do.
1:28:36
So I think, it's like a twofer.
1:28:38
So the first part is, I think
1:28:40
the AT Protocol is, like, hands down, way better than ActivityPub. And I want ActivityPub to just go away.
1:28:47
Like, I think we should all just... like, the AT Protocol, well designed, fixes a lot of the weird problems with ActivityPub. Like, we should be moving all of our social networks onto it. Like I want an ActivityPub version of, like, TikTok, I want an ActivityPub version of, like, YouTube, of LinkedIn, of, like, all of these. I want ActivityPub, or not ActivityPub, but I want AT Protocol versions of all of these things, right? ActivityPub, no, I keep saying it, wow, geez. I want all of that. But
1:29:15
I also think, and this is probably the popular part, we should
1:29:20
split all these social media apps into
1:29:22
like three different areas. I think there
1:29:24
should be like a client, there should
1:29:26
be like an algorithm, and there should
1:29:29
be like the storage or the
1:29:31
like delivery system. And I think those
1:29:33
three things should be owned by three
1:29:35
different entities and be interoperable with each
1:29:37
other. Like, I don't like, you know, the whole TikTok drama we've been going through. It's like... You kind
1:29:44
of just want the algorithm and you
1:29:46
want to be able to apply the
1:29:48
algorithm to like whatever content is out
1:29:51
there. I want to be able to
1:29:53
buy the algorithm or something like that.
1:29:55
And I also want to be able
1:29:57
to, like, take the algorithm and use
1:29:59
a different app right like I want
1:30:01
I want to have configurability and I
1:30:04
think all of our social networks should
1:30:06
be designed so that that is possible
1:30:08
yeah I want an algorithm for Facebook
1:30:10
that is only people I'm friends with
1:30:12
and in a timeline and nothing else
1:30:15
and if they share an advertisement, that also isn't there. I want old Facebook, basically. Yeah, yeah. I just want to
1:30:21
be able to see what old people
1:30:23
in my life are doing or the
1:30:26
yeah I guess it is mostly old
1:30:28
people on Facebook now Yeah,
1:30:30
but that's my unpopular opinion. I think
1:30:33
AT Proto is the one we should
1:30:35
be going with and we should also
1:30:37
be splitting algorithms from clients from storage.
1:30:39
I think it was smart of them to
1:30:42
release the protocol before anything else. It
1:30:44
was pretty cool. Yeah, I mean, that's
1:30:46
the whole point of what they were
1:30:48
doing. So yeah, but. I haven't done
1:30:50
enough research on AT Proto to know
1:30:53
like why it would be better to
1:30:55
like have everything kind of follow it.
1:30:57
I mean, I guess there's two big
1:30:59
things, right. One, you can actually migrate
1:31:02
your account without the participation of the
1:31:04
server, right? So with ActivityPub, if, like,
1:31:06
the person that's hosting the server just
1:31:08
deletes it, your stuff's gone, right? You're
1:31:10
just done, finished. With AT Proto, even if the server gets deleted, like, the client stores all of the stuff, so you
1:31:17
can just migrate somewhere else. So you
1:31:19
can migrate without the participation of the
1:31:22
server. So that's a big thing. And
1:31:24
then two, because of the way relays
1:31:26
work, you can host your own server
1:31:28
without getting bombarded if you get like
1:31:31
a viral post or something like that,
1:31:33
right? So it's like servers don't connect
1:31:35
directly to other servers, they kind of
1:31:37
go through relays, which are, you know,
1:31:39
you could see like in the future
1:31:42
maybe Cloudflare or Fastly setting up like an AT Proto relay so that, you
1:31:46
know, you can just pass through that
1:31:48
and, you know, that makes it a
1:31:51
whole lot easier for you to actually
1:31:53
just run your own infrastructure instead of
1:31:55
having to like rely on some other
1:31:57
hosting provider or some other server. Those
1:32:00
are the two big reasons why I
1:32:03
like AT Proto. I like it. I'm
1:32:05
still split on if social media should
1:32:07
exist at all, but you know. There's
1:32:09
an unpopular opinion. No more social media.
1:32:12
I mean, hey, we almost, TikTok almost, you know, disappeared for America.
1:32:16
So, and people were upset. So I
1:32:19
don't know if that's. How else would
1:32:21
you connect with other people that you're not geographically near? I mean, how else
1:32:26
are you going to find your news?
1:32:28
Twitter died. Just don't. Just don't. You're
1:32:30
only allowed to have friends that live near you. You're not my friend. I mean, I'm
1:32:35
kind of okay with that. Always connecting
1:32:37
with people across the world all the
1:32:39
time. Because sometimes we focus on, we
1:32:42
make the world's problems, our problems, very
1:32:44
quickly, when there's community problems we should
1:32:46
be focused on first. So like, I'm
1:32:49
not fully opposed to it. But I
1:32:51
do have a lot of people that
1:32:53
I know that are not geographically situated
1:32:55
near me. So I'm just like not
1:32:58
going to talk to my family then,
1:33:00
no, cool. No family for you. I'm
1:33:02
speaking just for me? I'm not saying you
1:33:07
can't have a phone anymore. Like, you
1:33:09
can still call people. Certified mail. Certified
1:33:12
mail only. I mean, I don't really
1:33:14
know how to post weird memes on
1:33:16
Facebook. She does not know how to
1:33:18
answer a phone. You know, yeah, you
1:33:21
can text people, but I'm not going
1:33:23
to just like randomly send photos to
1:33:25
like everybody in my contacts, but I'll
1:33:28
like post photos on Instagram or post
1:33:30
a story on Instagram. Oh, I love
1:33:32
when people just send me photos. Should
1:33:35
I just start sending photos? Yeah, you
1:33:37
go on, see something cool. Send it
1:33:39
my way. I love, I love, I
1:33:41
love when people do that. I do
1:33:44
think it's a good way to like
1:33:46
reconnect though. Like I will see some
1:33:48
like weird or funny or interesting post
1:33:51
from a friend
1:33:53
I haven't talked to in a while. Oh, that's really cool. Where
1:33:55
were you? Well, oh, congrats. You got
1:33:58
married. Whereas I wouldn't just, like...
1:34:00
I wouldn't
1:34:02
message, I wouldn't, like, message someone.
1:34:04
To be honest, I might
1:34:07
have forgotten the existence of this person
1:34:09
were it not for this strange holiday
1:34:11
pic that I saw. So I think
1:34:14
it's a nice way to remind you
1:34:16
of like the people in your life
1:34:18
and take you out of your little
1:34:21
bubble. I will say, and maybe this
1:34:23
isn't a popular opinion as
1:34:25
well, that I vastly prefer to
1:34:27
give, like, social media out to people
1:34:30
that you meet randomly. Like if
1:34:32
you meet people at bars, like, oh,
1:34:34
you meet someone, I'd rather give
1:34:37
out my Instagram than my phone number,
1:34:39
because with Instagram then, as you were
1:34:41
saying, that's, that's completely fair, maybe they
1:34:44
have a story or a post, they'll
1:34:46
be like, oh hey, and you can
1:34:48
start up a conversation with them. But
1:34:50
with text messages you've just gotta, like, blindly
1:34:53
text people, and that's always awkward. So
1:34:55
I'm just like... and then I've given
1:34:57
more people my phone number, and I don't
1:35:00
want to do that. Like, yeah, I vastly
1:35:02
prefer, like, you know, Instagram or whatever,
1:35:04
so we can just, you know, chat
1:35:07
if we want to chat, or we
1:35:09
can just, like, passively watch each
1:35:11
other's stories and never talk to each
1:35:13
other, and that's, you know, fine. A little
1:35:16
weird. I think we've become way too
1:35:18
scared of awkwardness. But that's another unpopular
1:35:20
opinion. Anyways, I feel like we've had
1:35:23
a plethora of unpopular opinions and we've
1:35:25
discussed them at length, so I don't
1:35:27
think we need to have any more. And you
1:35:30
know, mine would be brilliant, so take
1:35:32
another half hour. I think we can
1:35:34
cut it off here. Yes, oh God,
1:35:36
this is already so long. I'm sorry
1:35:39
Chris. Oh it's fine. I'm just used
1:35:41
to editing. You know, it's one of
1:35:43
those things where you get in the
1:35:46
editing session and it's just like, oh,
1:35:48
I'm an hour in and you're like,
1:35:50
I still have 40 minutes to go.
1:35:53
It's okay though. It's okay. I'll make
1:35:55
it. I'll make it through. I will
1:35:57
persevere. Any closing remarks? Yeah, one big
1:35:59
announcement that will be announced by the time this
1:36:02
podcast ships, or this episode ships, is
1:36:04
that Fall Through has joined the Changelog
1:36:06
podcast universe. So we are... over
1:36:09
on there. I don't know if the
1:36:11
super feed, when, like, the super feed's
1:36:13
gonna be up or anything like that,
1:36:16
but yeah, we are, we are part of
1:36:18
the Changelog, we're part of the Changelog
1:36:20
podcast universe. And you should listen.
1:36:22
I mean, I mean, I don't know
1:36:25
how to describe it for, for
1:36:27
listeners, of course. No, I know, for
1:36:29
our listeners, I don't know how to
1:36:32
describe it. I guess it's like, you
1:36:34
know, what Changelog was, where they
1:36:36
had, like, Go Time and
1:36:39
all of that,
1:36:41
just with shows that they don't produce
1:36:43
themselves. So high-quality developer pods and
1:36:45
you know, all of the nice, like,
1:36:48
chat channels, so we'll be in their
1:36:50
Zulip, or on Slack for as long
1:36:52
as I still have that, all of
1:36:55
that cool stuff. And if you want
1:36:57
to hear more about this you should
1:36:59
tune into the Changelog and Friends
1:37:02
episode that shipped on Friday where Matt
1:37:04
and I will be talking with Jared
1:37:06
and Adam about this wonderful announcement. What
1:37:08
date is Friday? Just in case I'm
1:37:11
listening to this and... January 20th.
1:37:13
Is that it? That sounds right. No.
1:37:15
Yes, because today's... yes, yes. Or it'll
1:37:18
be, like... what's the current date? Hold on.
1:37:20
It's the 20th today. It'll be out there
1:37:22
somewhere. It'll be Changelog and Friends
1:37:25
episode number 77. Perfect. All right. All
1:37:27
right. Yeah. Take us home, Ian. Yeah,
1:37:29
come on. Thank you all for chatting.
1:37:31
I don't have any take-us-home line.
1:37:34
Thanks for listening. Comment if you have
1:37:36
ideas about complexity. We would love to
1:37:38
hear them. And see you next time.
1:37:41
Yeah. I mean, I guess we should
1:37:43
say we're trying to do a mini
1:37:45
series on complexity. So definitely send us
1:37:48
all your thoughts. And if there's anybody
1:37:50
that we should talk to, that you're
1:37:52
like, oh, I really want to hear
1:37:54
this person's opinion on complexity, send it
1:37:57
our way, you know? Send it to
1:37:59
us on Bluesky or in the
1:38:01
Changelog Zulip. Or, I don't think
1:38:04
we have email set up, so not
1:38:06
by email, but just find a way
1:38:08
to reach out to us. Or you
1:38:10
could leave a comment. Or if you're
1:38:13
watching on YouTube, leave a comment. That's what
1:38:15
you should do. Also subscribe to our
1:38:17
YouTube channel. We would greatly appreciate it
1:38:20
and like turn on the little notification
1:38:22
bell while you're at it. And yeah,
1:38:24
I think that's all the things. Yeah,
1:38:27
like, comment, share, subscribe. Those are the
1:38:29
things you're supposed to say. Yes. Send
1:38:31
a letter. Send a letter via certified
1:38:33
mail, if you can find the P.O.
1:38:36
box. Now, now that I've said that
1:38:38
twice, people are actually gonna spend a
1:38:40
lot of time. Should we, should we
1:38:43
offer a reward for the first letter
1:38:45
we get to our P.O. box? What
1:38:47
kind of reward? I will wait. What does
1:38:50
first mean? First by certified mail time?
1:38:52
I guess first by when I pick
1:38:54
it up. What if I find it? Postmark
1:38:56
date. Create it and then physically walk
1:38:59
it to your door, Chris. It's a
1:39:01
PO box, not a door. I've certified
1:39:03
the mail via me. I would love
1:39:06
to get some letters. That PO box
1:39:08
has been sitting empty for a while.
1:39:10
Send me some stuff. Nice letters. Send
1:39:13
nice letters. No weird stuff. So no,
1:39:15
actually I'm not going to continue that.
1:39:17
On that note, you're going to make
1:39:19
us mark this episode explicit. This is
1:39:22
why I'm not a host anymore. Goodbye,
1:39:24
everyone. Goodbye. See you next time. Oh,
1:39:26
I found it! That fast!