Episode Transcript
0:20
Hi, Ben.
0:21
Hey, Matt.
0:22
How are things?
0:23
Not too bad.
0:24
Cool. Well, unusually for us, we had a topic before we started today, because yesterday you and I got coffee, and I got excited.
0:31
Yes. Uh-huh.
0:35
And then we both were like, I wish we'd recorded this. So we're going to try and have that same conversation again, with the same level of excitement and enthusiasm, which you know is a bit of a struggle for us both.
0:42
Mm hmm.
0:47
We're not very passionate about what we do.
0:47
Right. We're not normally enthusiastic people.
0:51
No.
0:51
So every moment of this podcast has been pure pain.
0:52
So we'll have to try that.
0:56
I'm just going to admit that I've been keeping it hidden up until this very moment.
1:01
Until this second, and now it's revealed to the public.
1:02
Uh huh.
1:03
Yeah. I mean, we've already told everyone that this is an extremely low-effort podcast.
1:04
Yes.
1:07
We go out of our way to make it as simple as possible for ourselves, even though, you know, I still like the fact that we do transcripts and that we do edit them a little bit.
1:11
Right.
1:16
I try and make my dog not make a noise in the background.
1:16
Mm hmm. Good luck.
1:20
I know. He's just started moving; I can hear him. We'll see how this goes.
1:24
Mm hmm.
1:26
But today, yeah I wanted to talk to you about some cool programming stuff that I've actually been doing.
1:29
Mm hmm.
1:30
you know after our last conversation where we said, hey, it's kind of cool to be a programmer and not be a people manager, that has come to pass in the last few days.
1:33
Yeah.
1:38
Mm hmm.
1:41
So yeah, let me phrase it as a puzzle for you.
1:47
OK.
1:47
So these are the constraints you have. You're writing a piece of code where you have many threads, possibly many processes. There are some important, critical pieces of information that you need to share between these various programs.
2:06
And it's imperative that at no point is the producer of this information stopped or blocked by anything.
2:07
OK, yeah.
2:16
You only need the most recent output. So imagine it's, let's say, the current S&P 500, right? Whatever the S&P 500 is, something's happening, you're measuring it, and then you're writing the value somewhere that many readers are then going to read from.
2:26
Mm hmm. Yeah.
2:35
But obviously this information is big enough that you can't just write it and hope that everyone sees the output
2:45
Right.
2:46
exactly as you wrote it in, right?
2:46
It's many bytes.
2:48
It's many bytes, right, exactly.
2:49
Uh huh. Yeah.
2:49
So how do you design a system where you can share small, but not trivially small, pieces of information between threads, where you never block the writer, the readers need fast access to it, but you don't mind if they have to do a bit of extra work?
3:09
So there are actually many solutions to this, but what comes to mind?
3:15
Well, I mean, you haven't mentioned anything about the latency constraints.
3:21
Correct. Yeah.
3:22
So given the fact that there are currently no latency constraints, I can think of many solutions.
3:22
Yeah.
3:27
Well, actually, go on then. So let's think of a latency constraint where we don't block the writer. At no point do we block the writer.
3:33
yeah
3:34
So think of, well, what can we do?
3:34
Uh-huh. So going back to one of our previous episodes on building boring things: files. You could use a fan-out queue.
3:45
Uh-huh.
3:46
You could use something where, if a receiver of this information is expected to handle falling behind by just losing information,
4:05
Right, which is absolutely the case. We only care about the most recent value for the readers.
4:07
Yeah. You could use some sort of network broadcast to just broadcast it out.
4:09
Yeah.
4:12
And it's like, if you're listening and you get this, that's cool. And if you don't, then you don't.
4:15
That is cool. Yeah.
4:17
Which has some of the same properties as a fan-out queue, but isn't exactly the same thing.
4:22
Yeah.
4:25
You know, those are probably the top three, again modulo no latency constraints.
4:33
That's where I would probably start: one of those three.
4:34
I mean, if you drop RAM constraints and things like that, you could just have an append-only queue, and then the readers only look at the most recently written entry.
4:43
Yeah, right. Well, that's what I was saying. My first one was files, right?
4:47
Yeah.
4:47
Like, here's a file. The messages are fixed length, so start at byte zero and read. And then when you hit the end of the file, try again, right?
4:57
Right. And if you want to read the latest, you don't care about the intermediates.
4:58
And just keep going.
5:02
Yeah.
5:02
Which I didn't state very well. Then you seek to the end of the file.
5:06
Yeah.
5:06
And if it's not a multiple of the object size, then you know that you've caught the writer while it's still writing. So you mod it, something like that.
5:14
Yeah, yeah, yeah, that'd be really dumb and would work.
5:15
And that gives you some amount of... Exactly.
5:19
Yeah.
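(Editor's note: a minimal sketch of the "dumb but works" file scheme described above. The writer appends fixed-size records; a reader seeks to the end, rounds down to a whole record, and reads the last complete one. The record size and names are made up for illustration, not taken from the episode.)

```cpp
#include <cstddef>
#include <cstdio>

constexpr long kRecordSize = 32;  // assumed fixed record size in bytes

// Read the most recent complete record from an append-only file.
bool read_latest(const char* path, char* out /* kRecordSize bytes */) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    std::fseek(f, 0, SEEK_END);
    long end = std::ftell(f);
    long whole = (end / kRecordSize) * kRecordSize;  // drop any half-written tail
    bool ok = false;
    if (whole >= kRecordSize) {
        std::fseek(f, whole - kRecordSize, SEEK_SET);
        ok = std::fread(out, 1, kRecordSize, f) ==
             static_cast<std::size_t>(kRecordSize);
    }
    std::fclose(f);
    return ok;
}
```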
5:19
Yeah. And all of those things I think you could make work. But then, to get back round to the obvious part here: we don't have infinite RAM, so files are probably not a good choice.
5:30
And we're looking for the lowest possible latency. What is the fastest possible way that you could share this information? Again, the writer is probably going to be updating this very frequently. It's some derived financial thing, and then the readers from time to time need to be able to look at it and say: what is the current price?
5:49
Maybe it's even a web view of: what is the current S&P 500? And you're like: okay, I don't care how long it takes my web service to read the current value out, but it is at least a consistent value.
6:04
Right.
6:04
It's not like half of... yeah, that's right, yeah.
6:05
Yeah, you don't get half the bytes. You're like, wow, the market looks cheap today.
6:09
Yeah, that's a very bad mistake that you make usually only once in a career.
6:14
Yeah.
6:15
So yeah, one of the solutions to this problem is to use a relatively unusual and fun construct called a sequence lock. It's used in a bunch of places in the Linux kernel, so it's kind of well-worn and well-trusted. But it has some interesting characteristics. So let me explain to you what a sequence lock is, and then we can talk about why it's kind of difficult to get right, and why I've spent a week trying to write one and am not sure whether or not I've actually got it right.
6:50
So the trick is this. If you are a writer... well, first of all, the piece of information is in a shared piece of memory, and it's the same piece of memory throughout. We're not using it like a queue; it's just one place to look for a particular value.
7:08
So like, this is the S&P 500, it lives at this address in some piece of shared memory.
7:12
Okay.
7:12
And if you've got lots of...
7:13
So the readers and the writers in this context have this shared memory. They both have access to this memory, which means they're at least running on the same machine.
7:21
Correct, yes, that is right, yeah.
7:22
They may not necessarily be in the same process, but they're running on the same physical computer or virtual computer.
7:26
That is correct for this. Yes.
7:28
Yes.
7:28
Yeah, same... well, yeah. Virtual computer or whatever.
7:31
Yes, yes.
7:32
I mean, modulo tricks to do RDMA and things like that across networks, what we're talking about here really, genuinely, is the same computer: shared RAM chips somewhere in the system.
7:44
Right. And it's the same address.
7:44
Yep.
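(Editor's note: a minimal POSIX sketch of the setup being described: writer and readers on the same machine map the same named shared-memory object, so both see the same bytes. The name "/spx" and the 4 KiB size are made up for illustration.)

```cpp
#include <fcntl.h>     // shm_open, O_* flags
#include <sys/mman.h>  // mmap, PROT_*, MAP_SHARED
#include <unistd.h>    // ftruncate, close

// Map a shared-memory region; the writer passes create = true once,
// readers then open the same name.
void* map_shared(bool create) {
    int fd = shm_open("/spx", create ? O_CREAT | O_RDWR : O_RDWR, 0600);
    if (fd < 0) return nullptr;
    if (create && ftruncate(fd, 4096) != 0) { close(fd); return nullptr; }
    void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  // the mapping stays valid after the fd is closed
    return p == MAP_FAILED ? nullptr : p;
}
```

(On older Linux/glibc, shm_open needs linking with -lrt.)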
7:46
So, as well as storing the piece of information that we want to protect, which, as I say, is like eight, 16... well, if it's eight bytes we probably wouldn't worry about this; 16-to-32-ish, that kind of size in bytes.
7:56
Mhm.
7:58
We also store a sequence number. Every time the writer goes to write, it increments the sequence number, updates the data inside the block that's being protected, and then increments the sequence number again.
8:13
We'll get to why that's important. Anyone who's listening who knows roughly what I'm talking about here has already seen all the problems with what we've just talked about. But let's talk about what the reader does.
8:23
The reader reads the sequence number. If it's odd, the reader knows that the writer is in the process of updating it.
8:34
Okay, yeah.
8:34
So the reader says, I can't look.
8:36
Right.
8:36
And it just waits a bit and tries again.
8:39
Mm hmm.
8:39
So what we've got is a very optimistic kind of attempt here. If it's even, we take a copy of the protected data. But that doesn't mean to say that it wasn't being updated by the writer while we were copying it.
8:42
Yeah.
8:53
Mm hmm. Right.
8:55
So we've just read a bunch of data, but we can't look at it yet. What we're going to do is look at the sequence number again. Only if it matches the first sequence number we read do we know that we got a good copy of the data that was being shared. Now we can look at it; it's safe. We know that the writer wasn't in the process of updating it, nor did the writer do an entire update while we were taking a copy of the information.
9:22
Right.
9:24
Obviously, if the sequence number changed, we have to retry.
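(Editor's note: a minimal sketch of the protocol as just described. The names are made up, and this is not the code under discussion. Single writer only; the memory-ordering subtleties that come up later in the episode are dodged here by using the default, sequentially consistent atomics.)

```cpp
#include <atomic>
#include <cstdint>
#include <optional>

template <typename T>
class SeqLock {
    std::atomic<std::uint64_t> seq_{0};
    T data_{};  // protected value; the unlocked copy is formally racy (discussed later)
public:
    void write(const T& value) {
        seq_.fetch_add(1);  // odd: writer is busy
        data_ = value;      // rummage around updating the payload
        seq_.fetch_add(1);  // even again, two higher than before
    }

    // One optimistic attempt: empty if the writer got in the way.
    std::optional<T> try_read_once() const {
        std::uint64_t before = seq_.load();
        if (before & 1) return std::nullopt;             // mid-update: can't look
        T copy = data_;                                   // take the copy...
        if (seq_.load() != before) return std::nullopt;  // ...then re-check
        return copy;
    }

    T read() const {  // retry until we get a clean snapshot
        for (;;)
            if (auto v = try_read_once()) return *v;
    }
};
```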
9:28
Right. So what happens if you have the slowest reader in the world? Like a dude with an abacus.
9:35
Yeah.
9:36
And every single time you go to read the sequence number a second time, it's different.
9:42
Then you are in trouble, right? It is, yes.
9:45
You're just never getting this data. You're just not, you're not fast enough.
9:46
It absolutely is.
9:48
You need to be faster.
9:49
That is correct, yes. So, in terms of testing on the machine that I've got, I can get it to a state where, if I have a writer that's fast enough that it literally goes back to back doing modifications, then it is possible to completely starve out the reader.
9:50
Yeah.
9:53
Mm hmm.
10:05
I say completely starve out; every few microseconds the reader was able to sneak in, but this is with a very fast reader, right? And on purpose, what we're talking about is a tiny data structure, where you're saying these are bytes that I'm going to memcpy out of shared memory. So you're kind of hoping that,
10:22
even if you're going to do something slow with it, you're doing it with the good copy that you got of the data.
10:27
Yeah.
10:28
And so it shouldn't matter too much. But it is possible. That is absolutely a trade-off you're making here. The writer can starve out the readers, but computers are super fast.
10:39
And provided your writer isn't doing something pathological like sitting in a tight loop doing an update, even the loop overhead is enough to give the reader a decent chance of sneaking in from time to time, at least on a computer timescale.
10:52
But it's absolutely possible, yeah. And if you care about these things, you probably need to make your API not retry, or have a maximum number of retries and sort of fail for the reader: hey, I tried a thousand times, it didn't happen.
11:03
Yeah, right.
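(Editor's note: an illustration of that bounded-retry API, built on the hypothetical SeqLock sketch from earlier: give up after a fixed number of attempts and let the caller decide what a failed read means.)

```cpp
#include <optional>

// Try at most max_tries optimistic reads; empty means we were starved
// out by the writer, and the caller gets to choose what to do about it.
template <typename T>
std::optional<T> try_read(const SeqLock<T>& lock, int max_tries = 1000) {
    for (int i = 0; i < max_tries; ++i)
        if (auto v = lock.try_read_once()) return v;  // clean snapshot
    return std::nullopt;                              // starved out
}
```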
11:04
And then the reader could make a choice about it. So it's an interesting thing here. Again, the writer never has to block; the writer is able to just write unconditionally.
11:15
So for example, if the writer is processing a stream, a torrent of incoming UDP market data as we do in our world, then it's imperative that you never block, because if you block for any length of time, you might miss some information on the network, and then you have to go through an expensive recovery process.
11:32
So it has a great property for the writer.
11:32
right
11:34
And for the reader, it has a bit of a tax. It's not free to do this check, but...
11:37
Mm hmm.
11:41
If the ratio of times the reader and writer are both looking at the same thing at the same time is effectively de minimis, which hopefully it would be, then essentially it's free on both sides.
11:45
Mm hmm.
11:50
Mm hmm.
11:53
It's a cool data structure. My understanding is it's used in the Linux kernel to protect some of the data around the system time.
12:00
Mm hmm. Hmm.
12:02
So the kernel has some configuration information about how fast the clock's running and all that kind of good stuff. It's bigger than it can atomically switch out, which is a thing that CPUs can do.
12:15
And it's mapped into every process, so that when you do gettimeofday, you don't actually have to do a system call. You can just read these numbers directly out of a piece of memory that's read-only in your process.
12:22
Oh, cool. Yeah.
12:26
But the kernel can update it when NTP's fiddling with the time, or when there's overflow in some of the hardware counters that you need to use to actually work out what the current time is. But for the most part, the kernel doesn't touch that information.
12:39
But if it does, it needs to do it in a way that's safe for all the processes to read from, and they don't mind doing a retry. I may be butchering the memory of exactly where that's used, but it's something like that.
12:48
Mm hmm.
12:49
It's a cool thing to do. So...
12:53
Hmm. So this reminds me there's a Sylvester Stallone movie. I don't remember if it's Judge Dredd.
12:59
Whoa, was not expecting it to go this way.
13:01
Yeah, it might be. I think it's Judge Dredd. It might be Demolition Man. I don't remember which. I think it's Judge Dredd.
13:09
I've only seen one of those, so...
13:11
But there's a robot walking around selling food, and it's recycled food. And there's this thing playing, like: recycled food is good for the environment and OK for you.
13:23
And so this data structure to me sounds like this data structure is good for the writers and okay for the readers.
13:30
That's basically it. Again, though, it does depend on the duty cycle of the writers and readers.
13:33
Yes.
13:36
And even at the scale that we run at in finance, there are many microseconds in between packet updates. Many single-digit microseconds between packets, which is more than enough for even the slowest reader to get in and get a chance, even if it's delayed for a microsecond or so. So it's usually a win-win situation, and it's fun.
13:47
Yeah.
13:52
Right. Right. Yeah.
14:00
It's fascinating. It's interesting. But the real trick comes from the fact that everything both the compiler and the CPU want to do on your behalf to optimize your code is at loggerheads with this very strict ordering of events.
14:13
Yeah.
14:16
So as I described to you, the writer absolutely has to bump the sequence number first.
14:18
yeah
14:24
Then it needs to change all of the data. It's got a certain amount of time to rummage around changing the data, and again, it's blocking the reader all the time it's doing this.
14:29
Yeah. Yeah. And the sequence number is odd while this is happening, which is the signal to the reader that it shouldn't be reading.
14:34
Correct.
14:36
Right.
14:36
That's correct, yes, yep.
14:37
Yep.
14:38
And this only allows for a single writer. There are extensions that relatively trivially let you have multiple writers, for what it's worth.
14:41
Right.
14:43
And then at the end, once you've finished rummaging around and mutating your data, you need to bump it again: both to make it even again, to signify that it's okay to start reading it, and to be two higher than when you started, so that anybody who got a copy while you were modifying it also notices and knows to retry.
14:53
Uh-huh. Right.
15:03
But those absolutely have to happen in that order. And for folks who are used to regular programming languages, like Python: you write a sequence of statements, and they absolutely do happen in that order, right?
15:18
Uh
15:18
There's nothing particularly exciting. If I said A equals one, B equals two, A equals three...
15:22
-huh.
15:24
Uh-huh.
15:25
At no point could you pause the CPU and observe that A was three and B was two, or whatever. Sorry, that's a really bad example; it needs a picture for me to even get it right in my head.
15:35
yeah
15:39
Yeah.
15:39
But there's nothing funky going on there. It just does things one after another. But for an optimizing compiler, it's very convenient to be able to play fast and loose with all of the operations that you're doing, right?
15:54
Right.
15:54
If you can pull forward something that's expensive that you know you're going to do later on, then you can start a big divide or a multiply or whatever while you're doing some other setup work.
15:58
Yeah.
16:04
And then when you go to look for the result of the divide, it's ready for you.
16:06
Uh-huh.
16:08
Hooray! Compilers want to do this all the time. So they want to be able to move the instructions around.
16:10
Uh-huh.
16:12
And you need to be able to tell the compiler, no, you can't move these otherwise unnecessary looking memory operations.
16:19
Right, right. Is there anything you can use to indicate that you're writing into memory that is potentially shared with another process? So that the compiler knows: I'm doing this for the side effects, on purpose, in a way.
16:34
There is. Specifically in terms of C++, which is where I am strongest in this regard, a way to model this is to use the volatile keyword. That is not a good way, but it is a way. Volatile is on the path to being deprecated, because nobody can really describe what it's for or how to use it.
16:57
But what it essentially says is, don't make any assumptions about this memory location.
17:01
Okay.
17:02
And when you're writing a device driver, some very low-level stuff, then having a volatile pointer to something that maps to, I don't know, a temperature sensor, so that when you read from it you're reading the temperature...
17:16
It's not real memory; it's just a thing. That's a great candidate for something to be volatile.
17:20
It says: hey, this can change outside the program. Every time I say to read or write this, you can't optimize around it. There's nothing you can do. Just do it.
17:27
Yeah.
17:28
So that is a way to do it.
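(Editor's note: an illustration of the device-driver case just described. The address is made up, but the pattern is real: marking the pointer volatile tells the compiler every access must actually happen.)

```cpp
#include <cstdint>

// Hypothetical memory-mapped temperature register; the address is assumed.
volatile std::uint32_t* const kTempSensor =
    reinterpret_cast<volatile std::uint32_t*>(0xFFFF'8000);

std::uint32_t sample_twice() {
    std::uint32_t a = *kTempSensor;  // a real read of the hardware
    std::uint32_t b = *kTempSensor;  // another real read; not folded into one
    return (a + b) / 2;
}
```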
17:28
Mm hmm.
17:30
But most compilers will see the keyword volatile and then start slowly backing away from your code.
17:39
They Homer into the bushes.
17:41
They homer into the bushes, yes. Well, certainly the optimizer does.
17:43
Yeah.
17:43
The optimizer says, oh, there's a volatile here.
17:45
Uh huh.
17:46
So as not to anger the gods of volatility, we're not going to do anything here. Now, in your very critical code... arguments obviously exist for: don't use volatile for this, for other reasons, which we'll get to in a second.
18:02
But specifically for shared memory, where another process outside of the program you're compiling could be changing it, there's an argument that says volatile is the right way to model it. So the short answer is: for shared memory like this, there isn't really a good way in C++ to say, hey, this bit of RAM outside of this program's remit will change under your feet.
18:22
Mm hmm.
18:23
Right. So the best we can do is model it with atomics. And atomics have a whole bunch of things that come with them, right? One of the things is atomicity at the hardware level.
18:33
Like, if I read and write to an atomic variable, then it either happened or it didn't happen. You'll never see half of it written, you know, like if you've got an eight-byte sequence number or a four-byte sequence number.
18:42
Yeah, right.
18:44
There's no world in which I write that, and somehow only the first two bytes of that four byte value have been written. That's what, at the basic level, an atomic operation is.
18:50
Yeah, yeah.
18:53
And the compiler will collaborate and generate the correct CPU sequences, or it will make it an error if it can't. Like: hey, you've got a 128-byte structure;
19:05
we can't do this atomically.
19:06
Yeah, you can't guarantee that. So yeah.
19:07
Now, that being said, what it will actually do is use a spin lock to do bigger things.
19:13
Go.
19:14
So there's a way of saying... you can assert statically at compile time: hey, can I do this without some kind of other lock being taken out? Because obviously the whole point of this is not to have a lock.
19:22
Uh huh.
19:24
So it would be really tragic to accidentally end up with one.
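(Editor's note: an illustration of that compile-time check. is_always_lock_free reports whether std::atomic&lt;T&gt; can avoid the hidden spin lock; whether a 16-byte payload passes depends on the target CPU and compiler flags, as discussed next. The Price type is a made-up example.)

```cpp
#include <atomic>

struct Price { double bid, ask; };  // 16 bytes, hypothetical payload

// Fails the build rather than silently taking out a lock at runtime.
static_assert(std::atomic<Price>::is_always_lock_free,
              "std::atomic<Price> would fall back to a hidden lock");
```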
19:27
But all modern CPUs will let you write four bytes, and probably eight bytes, and, with some very big caveats and footnotes, 16 bytes using some tricks. You can do those atomically, provided they're aligned, they're inside a cache line, and they don't straddle two cache lines, all these kinds of things. So we can assume we can do that, but only for a small amount of data. But another thing about the C++ memory model is that the atomics also model the fact that this could be being written or read from other threads.
20:00
Now, obviously, if we talk about multiple processes, that's kind of a stretch to call it a thread.
20:05
Right.
20:05
Um, but it's the kind of the best we've got right now. If you wanted to share between processes, it's only the thing that I've, I've gone with here. Now, automics are funny because, um,
20:19
reading or writing atomically to memory doesn't necessarily mean the compiler isn't going to reorder things. It just means that you're literally reading or writing this memory location. So back in the example of the A equals 1, B equals 2, A equals 3
20:41
Mm hmm.
20:41
thing: you could make those all atomic. And I'm saying not C++ atomic; I mean atomic in the sense that the CPU can read and write these memory locations, and it either happens or it doesn't happen.
20:52
And that's without telling the compiler: oh, but you can't elide the fact that A equals one ever happened. You can't throw the A equals one away just because you're about to overwrite it with a three later on. That's a separable thing.
21:02
Right, yeah.
21:03
But in C++, they're sort of bound together. And so when you say something's atomic, you're also saying something about the operations that can happen with the memory system, where the memory system encompasses both the compiler's ability to move things around and, more critically, the CPU's ability, which we'll talk about in a sec.
21:24
So in the example of setting A to 1, setting B to 2, and then setting A to 3: if they were all atomic, the compiler wouldn't throw away the A equals 1 at the top, because it's like, no, this had some semantic meaning to something I can't see.
21:39
Right. Yeah.
21:40
At least by default, it won't do that. The default in C++ is to use an incredibly expensive, sequentially consistent view for atomic operations, which means that the atomic operations and the code around them happen in the one true order, even if you have multiple threads doing it. That is potentially expensive on some hardware, because it's an expensive thing to serialize. You could imagine that, in order to make sure that everybody sees A equals 1 before anyone sees B equals 2,
22:15
you have to add in instructions, or some other sort of ex machina way of saying: all threads need to make sure that they're good, that the caches are all consistent across all the threads.
22:25
Right, right.
22:26
And now I can write to B, and then we go again: hey, has everyone got the B equals 2, if you care about it? And so on.
22:29
Mm hmm.
22:32
And that can be a very expensive operation. And so there are these different ways of interacting with atomics, where you can give a secondary semantic meaning that's less strict than this sequentially consistent view.
22:44
Mm hmm.
22:45
Like, for example, you can say: this is an acquire. So when you read a value, you can say, this is an acquire operation. "I am acquiring a lock" is the way I like to think about it.
22:58
Mm hmm.
23:00
And then when you finish with it, you release it. And again, we're not actually acquiring any locks or anything like this, but it has the semantics of a release operation and an acquire operation. And what do I mean by that? Well, if you're taking out a lock, it's a pretty big hint that anything after that is under the purview of that lock.
23:20
Right.
23:21
So if you've got 10 lines of code and line 5 takes out a lock, lines 6, 7, 8 and 9 can't be shuffled above the lock, because otherwise you'd be doing things you were trying to protect outside of the locked area.
23:21
Yes.
23:27
Right. Right, right, right. Yep.
23:31
And then, similarly, the release says nothing above the release can come south of the release operation itself, because again, you want to protect things that are bounded by an acquire and a release pair.
23:43
Yeah.
23:43
There are some other aspects to this, to do with different CPUs and different threads, that are much more subtle. But that is sort of the gist of it. And then you can also have a relaxed operation, which is: I need it to be an atomic operation, but I don't make any guarantees about which CPUs might see which values, as long as it's still atomic. That's more like the traditional, original atomic we were talking about, where it's just: make sure this happens atomically at a hardware level, but otherwise impose no other restrictions on the order of things.
24:11
So it's easy, from where I'm sitting, to think about it in terms of acquire and release. And so you could imagine that, in the case of our writer here, we write the incremented value with an acquire.
24:26
We do all of our rummaging around, and then we write the second increment with a release. And that kind of tells my processor that everything I did between the first increment and the second increment is, for all intents and purposes, under a lock.
24:46
Yeah, yeah, yeah.
24:46
And that makes a lot of sense, right? Unfortunately, C++ has some awkward, very-specific-to-C++ things that make that not quite true, but that's fine. Logically, that seems to make sense.
24:58
And on the reader, it's a little bit more difficult. You're definitely reading with acquire both times. You're acquiring a "lock", in inverted commas, then you're reading out stuff, and then you're reading the sequence number again.
25:10
But again, it's a little bit more subtle than that, because the second read is also a release, in a way. Because if you did get a good copy, you need to make sure that nothing of the copy leaks the other side of the acquire.
25:15
Yeah.
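(Editor's note: a sketch of one well-known C++ shape for the orderings just described, not the code from the episode. A store can't literally be "acquire" in C++, which is one of the wrinkles Matt alludes to, so the writer's first increment is followed by a release fence instead, and the reader pairs an acquire load with an acquire fence so the copy can't leak past the second check. The unsynchronized payload copy is still formally a data race, which the conversation comes back to later.)

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>

std::atomic<std::uint64_t> seq{0};
std::uint64_t payload[4];  // the protected data (formally racy; see later)

void writer_update(const std::uint64_t* v) {  // single writer only
    std::uint64_t s = seq.load(std::memory_order_relaxed);
    seq.store(s + 1, std::memory_order_relaxed);          // odd: writer busy
    std::atomic_thread_fence(std::memory_order_release);  // classic seqlock recipe
    std::memcpy(payload, v, sizeof(payload));             // rummage around
    seq.store(s + 2, std::memory_order_release);          // even: payload visible first
}

bool reader_snapshot(std::uint64_t* out) {
    std::uint64_t before = seq.load(std::memory_order_acquire);
    if (before & 1) return false;                         // writer mid-update
    std::memcpy(out, payload, sizeof(payload));           // optimistic copy
    std::atomic_thread_fence(std::memory_order_acquire);  // copy can't sink below
    return seq.load(std::memory_order_relaxed) == before; // validate the snapshot
}
```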
25:21
It's easy with a picture, but it's subtle. It's subtle and difficult to get right, which is why it took two solid days and lots of testing. And that brings us to the next bit here. It's somewhat straightforward to explain without a picture and a whiteboard, in terms of this sort of hand-waving about things the compiler is allowed to reorder. It's much, much harder to prove that you got it right, because the output of the compiler may or may not encode this optimization barrier that you've put in. All you've said to the compiler is: hey, by putting this acquire in here, I'm saying that no read that happens after this acquire can be reordered by the compiler north of the acquire operation. I can get circumstantial evidence by looking at the disassembly and kind of go, it doesn't seem to have done anything wrong there. But I can't tell that in general it is not able to do that, because
26:25
Yeah, right. Right.
26:29
it depends which program I put it in. If I put it in a big program where lots of inlining happens and all that kind of stuff, I don't know if I have hinted to the compiler correctly that this is in fact the right thing to do. So that was a real challenge. And we've talked a little bit about testing these kinds of things before.
26:48
Yeah, testing multi-threaded code in general is not easy.
26:52
No, exactly.
26:53
Yeah.
26:54
It's very difficult, and all you can do is develop a certain amount of confidence that you haven't got something egregiously wrong.
27:01
Right. I mean, I think it's the same thing with looking at your compiler output, right? The best that you can do in that case is see that you were wrong. But if you fail to see that you were wrong, you don't know that you're right.
27:16
That's exactly it. Yeah.
27:19
So, and I don't know if this is true of this sort of decompiled analysis, but you can do a lot of things where you give yourself lots of opportunities to see that you're wrong.
27:34
Run the code on different architectures, run it with different sets of data, run it multiple times over and over again, different compilers, different compiler settings, I'm sure.
27:39
different compilers even, yeah.
27:43
Yeah, yeah, yeah, for real.
27:44
And you look for all of the possible opportunities to prove yourself wrong. And then you fail to prove yourself wrong, and then you just kind of give up and admit that you might be right.
27:54
That is exactly the sort of approach that I've been taking, which is why it's taken
27:59
Mm hmm.
28:02
quite so long, apart from just reasoning about it in the first place.
28:04
And it's interesting you mention architectures there, because one of the gifts to programmers like myself of the Intel domination, in server architecture at least, is a very strong memory ordering.
28:17
Hmm.
28:22
So one of the really cool things, and I'm sure we've talked about it before, and certainly anyone who's seen any of the nonsense that I've put on the internet knows, is that I love the kind of amazing tricks that the CPU is pulling off under the hood.
28:22
Mm hmm.
28:34
And one of its real trump cards, which has caused problems along the way, is its ability to reorder and speculate instructions out of the sequence that you gave them to it in.
28:37
Uh huh. Yeah.
28:47
Very much like the compiler does, the CPU is able to find flows and sequences of instructions and reorder them to take better advantage: hey, I have a spare multiplier going now, and look, I can find a multiply that's 300 instructions in the future, and I know what inputs are going to go into it, so I might as well start it now.
29:02
Mm hmm.
29:05
All those kinds of tricks. It's amazing. But when it comes to loads and stores to memory, it doesn't reorder those. Or if it does, it does it in a way where it can track whether it got it wrong and redo them later on, which takes a bonkers amount of silicon to do.
29:15
Hmm.
29:22
But it does mean that in general, if I go load, load, store on one thread, and on another thread... it's so difficult to explain this without a picture, but the sequence of operations will make sense. For most intents and purposes, and there are exceptions, it's effectively sequential, even in the presence of these kinds of atomic-ish operations where you've got thread A doing one thing and thread B doing another thing. It's very hard to catch it out doing the wrong thing. So for the most part,
29:57
even the naive code that's got it "wrong", in air quotes, won't be wrong on an x86, because, short of the compiler doing its own optimizations, the code that it generates will run in the boring, dull order of loads and stores and increments that you gave it.
30:13
Mm hmm.
30:16
That is not true, in general, of every other CPU on the planet.
30:22
Because Intel loved to be backwards compatible, and presumably the very first time they went multi-threaded they went: oh, this seems like a useful property to have, unaware that it was going to hamstring them for the next 25 years. But ARM CPUs, RISC-V CPUs, MIPS: nothing else that I'm aware of has such a strong memory ordering. That is to say, if you do a bunch of stores and loads on one CPU,
30:49
they could be reordered with respect to each other, if you're able to observe them from another CPU. Obviously, they're not willy-nilly reordered to make your program wrong in the single-threaded case, right?
30:53
Yeah, yeah, yeah. Okay. me
30:59
That's not what I'm saying here. I'm saying that if you do load, load, store, the CPU won't reorder those loads and stores, from where you're sitting, to give you the wrong answer. But you might see something weird and wacky from another CPU. I think the case is, if you
30:59
Right.
31:16
write A to be one on one thread and then read B, and on the other thread write B to be one and then read A, you can see a case where they both observed the other value to be zero, even though each wrote its one first, and naively there shouldn't be a way they could do that, right? Again, this is terrible podcast material without paper, I'm sorry.
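(Editor's note: a version of the litmus test being described, the classic "store buffering" pattern. Under sequential consistency, ending with both r1 and r2 zero is impossible; with relaxed atomics it's allowed. As it happens, store-then-load is also the one reordering even x86's strong model permits, via the store buffer, unless you pay for the sequentially consistent default.)

```cpp
#include <atomic>

std::atomic<int> a{0}, b{0};
int r1 = 0, r2 = 0;

void thread_one() {
    a.store(1, std::memory_order_relaxed);  // write A = 1...
    r1 = b.load(std::memory_order_relaxed); // ...then read B
}

void thread_two() {
    b.store(1, std::memory_order_relaxed);  // write B = 1...
    r2 = a.load(std::memory_order_relaxed); // ...then read A
}
// Run concurrently, r1 == 0 && r2 == 0 is a possible outcome.
```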
31:39
So anyway, on those architectures, the atomic operations and this acquire and release, or acquire-release, type semantic that you give them is not just a hint to the compiler to say: compiler, you're not allowed to move these things around. It tells the compiler to emit instructions, or flags on the instructions, that say: hey, CPU, you're not allowed to do reordering here.
32:01
And so one of the other things we were doing while working on this, although we don't run on any architecture other than x86, was deliberately using the presence, and the particular semantic interpretation of the presence, of these serializing barriers or fences, or load instructions that have particular properties about what reordering can happen,
32:21
oh Okay.
32:22
as litmus paper, a litmus test: not only do we observe the code to be correct on x86, but we observe the loads and stores to be the right kinds of loads and stores on ARM, which says: hey, the compiler knows that it absolutely has to let this increment happen before any of the following code completes.
32:36
Right.
32:43
Right, right. Yes. This is sort of like telling somebody something and then being like: OK, now explain it back to me.
32:51
I suppose it is.
32:52
Right. And making sure that they understood what you said. You're going to put these things in there and see if the compiler reorders things in a way that doesn't make sense.
32:53
Yeah.
33:01
And you're like: well, OK, maybe we're not on the same page here about what instructions need to be ordered.
33:09
Well that's absolutely it.
33:10
Yeah.
33:10
Now, obviously, there's still no guarantee that that's right, and still no guarantee that, while the compiler has emitted those correct instructions, it wouldn't
33:19
move things around above and below those instructions, because it feels it has that degree of freedom. I can't introspect into the compiler and look at each instruction and say: given free rein, would you move this up further?
33:29
Right. Yeah.
33:30
But that would be an interesting thing to do. So it has been an interesting few days. I mean, the performance you get out of this is pretty astronomical. On x86, it's single-digit clock cycles to read and write something that is, you know, 16 bytes long.
33:39
Oh wow.
33:44
Now,
33:45
Yeah.
33:45
For something that's eight bytes long, you can just atomically write the whole thing and be done with it, and you don't need any kind of sequence lock. But if you've got 16, 24, 32, even up to 48 bytes, a single cache line's worth of data, this is still essentially free.
33:52
Mhm. Mhm. Mhm.
33:59
Well, "free": it's all relative in our world. So it's been a super fun thing, to the point where, actually, someone was reviewing the code and saying: well, you say it's a single-threaded writer.
34:10
Can you detect the case where there are two writers, and throw an exception or blow up or something like that? And the answer would be: yes, I can, but it would add something in the region of twice the overhead, because even the check and the read and the out-of-band jump is actually noticeable at this level.
34:23
Right.
34:28
It's pretty minimal, don't get me wrong. And also, what are you going to do at that point? It's a bit late if you've got two writers. But it's nice to have in debug, so that's what we've done: we put it in as a debug check.
34:38
Yeah, yeah.
34:38
But it's been a really fun and interesting journey. And in doing so, we've discovered a whole bunch of other interesting techniques with different trade-offs between back pressure on the writer versus readers having free rein.
34:53
Mm hmm.
34:54
The classic example of something that's a little bit more fair to the readers is a readers-writer lock, like an actual lock in this instance, where
35:02
Okay.
35:03
the lock is an atomic value, and, say, the bottom 16 or 32 bits are how many readers are currently reading, and the top bit, or however many bits, is: is there a writer? And then the readers, in order to acquire the lock, have to do some work here. They basically try to atomically replace the value with one higher.
35:25
Mm hmm.
35:26
And if that fails, they get back, as part of the atomic switch operation that the CPUs can do,
35:30
Mm hmm.
35:31
what the actual number was, and you can see whether or not the writer was there. And if the writer was there, all bets are off. But if you were able to increment it, okay, then you've let the writer know that there's at least one reader.
35:43
And so that's what all the readers do, and you can have like 10 readers all reading from the same thing, and that's fine. And then the writer has to do the other thing, where it says: I need to be able to replace a zero, literally no readers and no writers, with a top-bit-set value.
35:55
Right.
35:55
And if I'm able to atomically do that operation, then I've locked out all of the readers, I've got the lock, and I can spend my time writing to it.
36:03
Yeah. So just to clarify here: when you say you've got these two halves, the upper bits the number of writers and the lower bits the number of readers, you're saying the number of things that are currently reading or currently writing.
36:05
Yeah, sure.
36:17
Correct, currently reading and writing.
36:18
Not like the logical number of readers and writers in the system.
36:20
No, no, this is the current count, which is what allows it to work, yes.
36:22
Yeah. Yes.
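(Editor's note: a sketch of the readers-writer spin lock just described, with the low bits counting readers currently inside and the top bit marking a writer. Names and details are made up, not the reviewed code.)

```cpp
#include <atomic>
#include <cstdint>

class RWSpinLock {
    static constexpr std::uint32_t WRITER = 1u << 31;  // top bit: writer inside
    std::atomic<std::uint32_t> state_{0};              // low bits: active readers
public:
    void lock_shared() {  // reader entry
        for (;;) {
            std::uint32_t s = state_.load(std::memory_order_relaxed);
            if (s & WRITER) continue;  // writer inside: wait and retry
            // Try to bump the reader count by one; on failure, loop and
            // re-examine whatever value the CAS observed.
            if (state_.compare_exchange_weak(s, s + 1,
                    std::memory_order_acquire, std::memory_order_relaxed))
                return;
        }
    }
    void unlock_shared() { state_.fetch_sub(1, std::memory_order_release); }

    void lock() {  // writer entry: replace "no readers, no writer" with the top bit
        std::uint32_t expected = 0;
        while (!state_.compare_exchange_weak(expected, WRITER,
                    std::memory_order_acquire, std::memory_order_relaxed))
            expected = 0;  // CAS wrote back the current value; reset and retry
    }
    void unlock() { state_.store(0, std::memory_order_release); }
};
```

(Note that, exactly as discussed next, enough readers can starve this writer out indefinitely.)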
36:23
So at that point, what you're doing is actually preventing the data race.
36:24
Yeah.
36:28
So that's an aspect we glossed over earlier: strictly by the book, taking that copy of the data while it was unprotected by any lock is
36:37
Mm hmm.
36:37
not allowed by the C++ standard. It's undefined behavior to read a value that's being updated. Even though we don't look at it unless we know that we've got a good copy, that's not okay by the C++ standard.
36:43
Right.
36:47
Right.
36:48
There are some papers out there trying to codify it and come up with ways to make it okay to do, but we're ignoring that for now.
36:51
Mm hmm. Yeah.
36:55
But yeah, coming back to this thing: in this particular instance, no, we are actually preventing any kind of data race, because the readers do take out the lock for the tiny amount of time that they're taking their copy or working on it, and then they release the lock.
37:04
Mm hmm.
37:07
So it's not the number of readers in the system; it's the number of readers that are currently reading and have it locked.
37:11
Yeah, yeah, yeah.
37:13
But obviously you can have as many readers as you like reading from it. And that's potentially useful, again, if it's some kind of shared piece of data. The Linux kernel kind of example would be: hey, this is configuration information about which time zone everyone's in.
37:24
Right.
37:28
You know, there are many processes that are getting the time of day all the damn time, and we want to
37:33
right
37:35
allow them to make progress most of the time. They don't have to lock each other out.
37:38
Right, right.
37:38
But there's only one writer, and it's the kernel. And every now and then it needs to be able to go in and say: hey, someone just did a sysctl and changed the time zone, or whatever kind of shared information it is.
37:45
Yeah, right.
37:47
Of course, the problem with that is that the readers, if there are enough of them, can easily starve out the writer. You can get a situation where the writer can never actually get in.
37:55
Oh, right. Yes.
37:56
And so then it gets more sophisticated, where you start using more bits to say, hey, there's a pending writer, and then the readers aren't allowed to take the lock if there is a pending writer, and so on and so forth.
37:56
Uh huh.
38:07
So these things are complicated, but they're fun. They're fun and interesting. But at least that case doesn't suffer from the strict, by-the-book data race.
38:15
Mm hmm.
38:18
But yeah, testing this has been essentially an exercise in all the things we talked about: looking at the disassembly, writing some simple-ish tests. And I think in one of our earlier episodes you made a comment about these kinds of tests, something like: you have to have seen it work once.
38:35
Oh, yeah, right. Well, I had made the argument in an earlier episode that, you know, write all the unit tests you want, and I do, write all the unit tests you want.
38:48
But if you've never actually seen this thing work in production, you have no reason to believe that it works, right?
38:53
Right.
38:54
And the intention there is, again, as we were just talking about: otherwise you're passing on an opportunity to prove yourself wrong. And moreover, it's one thing to say: okay, I wrote this lock-free queue, and I pushed a bunch of data into it, and I pulled a bunch of data back out,
39:15
and I've tried that a number of different ways, with different data sets, on different architectures, and all these things, trying to prove that it doesn't work. But if you never even tried the base case of: did it ever work at all, once?
39:26
Right, then that's true.
39:27
Then you're just being silly, right? You're being silly on purpose. But that sort of rule of, have you seen it work once, I think is more applicable to things that are more mundane, that are
39:42
pretty easy to get confidence that they work with just unit tests. And the only question is: do you have enough unit tests?
39:51
Mm-hmm.
39:51
Right. If you've never seen it work once, then there's no reason to believe that you have enough unit tests, essentially. Right.
39:59
Yeah.
40:00
But this is a whole other thing, right? You can see it work a thousand times and still not be confident that it's going to work the thousand and first time.
40:06
That is, I think, the material difference.
40:07
Right. Yeah. And you've got to get much more creative about how to create that confidence, because unit tests alone and a little bit of exploratory testing, you know, watching it work one time or two times or four or five times, is not going to be enough.
40:24
Yeah, no, it's really interesting. I mean, when I put it out, we had four different engineers review it. And this definitely passes the test of... as in, in general, I don't write comments in my code.
40:30
Yeah. yeah
40:37
I think I've said this before: I prefer to have explanatory names for things, where it makes a lot of sense, rather than manifest constants. And a lot of people feel this way.
40:49
You know, you put sensibly named intermediate values or variable names, and then it explains itself, because essentially the last line is a piece of English prose, right?
40:54
Yeah.
40:56
That's great: return interest rate times whatever, you know, there you go.
40:58
Yeah. Yeah.
41:00
They don't have to explain it. But this code has somewhere in the region of 50 lines of non-comment code, and 200 or 300 lines of header.
41:03
Yeah.
41:10
Right?
41:10
yeah
41:11
It's like the ratio is obscenely the other way, because there's a huge set up explaining how the general thing works.
41:16
Right.
41:17
Then above every single line there's my reasoning: why it's okay to have it at this particular point in the sequence, why these flags are set, and which lines of the code it interlocks with, in terms of what the other thread might be doing.
41:27
Mm hmm. Mm hmm.
41:30
But yeah, so anyway, it went to review. Four engineers looked at it, and all of them so far have said: yes, it looks good to me. But one of them brought up a really interesting point, which was: there are formal verification systems for multithreaded programming that you can use, where you describe in a sort of DSL the operations that you're doing, and then a sort of theorem prover runs and shows that there isn't a window of opportunity where things are wrong.
41:56
And so, for example, if you're designing, I don't know, the Paxos protocol or the Raft protocol or whatever, this is the kind of thing where, if you can explain your protocol algorithm to this system, it can say: yes, I can prove that there exists no opportunity for this to give you the wrong answer.
42:14
Yeah.
42:14
And that was a really interesting observation. But unfortunately, they don't run on C++; they run on the description of the system.
42:24
Right, right.
42:25
And that's where I fell over, in terms of: I don't know that this is going to help me here, because I, quote, "know" that sequence locks work in theory, because they're in the Linux kernel, and there are five different open-source implementations that have wildly different atomic operations in them.
42:40
And so I was like: no, let me try and reason this from first principles, so that I can defend why we're doing it this way. But I would have no confidence that any explanation of what I had done, given to a theorem prover, was a good representation of what the C++ semantics were.
42:57
It's the difference between the algorithm and the implementation.
43:00
Exactly right.
43:00
You can prove that the algorithm is correct using the theorem prover. To prove the implementation is correct is much more work.
43:06
Yeah.
43:07
And the other thing is, and we mentioned this at the start of the podcast, we were like: well, assume no latency constraints.
43:15
Right.
43:15
And this problem gets a lot simpler. But when you add in the latency constraints... you were talking about the sort of ratio of code to comments here.
43:21
Yeah, yeah, yeah, yeah.
43:23
It's like the square of the comments now because you have the dimension of thread safety and the dimension of latency, and you're doing both of those things at the same time in the same code, right?
43:32
Yeah.
43:35
And that's going to make it much, much harder, because of the cycle time, at least. It's like: okay, we figured out some way to improve the performance by another 30%. And it's like: okay, and how do you know that this hasn't completely broken your threading model?
43:40
Yeah.
43:54
Well, we don't, so we get to do all of that testing all over again, yes.
43:56
All of that again, yeah, that's the thing. I think one loses the warm, fuzzy feeling that you can touch this code in any way and not go through some non-automated handoff.
44:04
Right.
44:08
Obviously, I will check in the tests, and the smoke test that runs for five seconds and makes sure nothing overtly stupid happens will remain. I'll leave that in.
44:16
Yeah.
44:17
But I can't, or I don't want to, think about having: oh yeah, this is it, we run it through Compiler Explorer, effectively (I've actually been using the site to do something for a change), through its six different architectures, and then assert that there should be no ldr of this variable before the ldar, that's the serializing load, blah blah, that kind of stuff. One could go as far as that, but I don't know that it wouldn't be so brittle that it would... yeah, I don't know. Actually, maybe that is the way to do it. I don't know if there's some...
44:41
Yeah.
44:46
Yeah. Well, the golden rule of all of these things, especially if you have something that is this thread-safety and latency sensitive, is: design it in a way where you won't have to change it, right?
45:01
Like, you're going to pay all this upfront cost and do all this really hard work
45:06
to get this code just exactly the way that you want. Make sure that you encapsulate it really well, so that the chances of you having to change it are as minimized as you can make them, so that you can just kind of leave it there, and then go into git five years later and be like: yep, this code hasn't changed in five years.
45:15
That's very... Yep.
45:22
It's been working for those five years. That's as confident as I'm ever going to get that any piece of code like this actually works.
45:27
So please don't change it, right?
45:29
That is a very, very good observation. In fact, one of the review comments was about an interface I had that was slightly more open-ended, and you could do some cool things with it. Rather than just replacing the value: well, the writing thread can at any time read it, and it knows no one else has changed it, because it is the only writer.
45:37
Mm-hmm. Yeah, yeah.
45:48
Mm-hmm.
45:49
So you could actually update it any time you like, by looking at the current value and maybe incrementing it rather than replacing it.
45:55
And then one of the reviewers was like: yeah, I'll be a user of this, and I will never want to do that.
46:01
Yeah, yeah.
46:01
And so I took out that interface just like, okay, you've got get, and you've got set.
46:05
Right.
46:05
And maybe, just maybe, I will have a get where I promise, Scout's honor, that I am the writing thread,
46:06
Mm-hmm.
46:12
and as such am allowed to get it without any kind of lock, promise promise promise: a function that will allow that other access type.
46:14
Mm hmm.
46:20
But yeah, it's been an interesting few days, and I figured it would be a fun thing to talk about. I said to you this was probably going to be a short one; I've just looked at the timer. And if we haven't lost absolutely everybody with my attempt to describe multi-threaded memory-ordering problems, then I don't know what will get rid of listeners.
46:43
We're trying our best here.
46:45
Yeah. No, no, this has been fun. Thank you for letting me talk about my pet project.
46:48
Oh, no, this is this is super cool. I love this stuff. I'm a big fan. I love this.
46:53
Well, maybe next time we'll talk about something a little bit more human-y again.
46:58
I don't know. We can talk about more of this. I'm down with that.
47:00
All right.
47:02
That's cool. Actually, yeah, we have done a lot of human-ing recently, so we're probably due a bit more tech content.
47:05
Yeah, right. Uh huh.
47:07
Although, you know, to the extent that folks have contacted us, and we encourage you to: you can contact us on Mastodon at hachyderm.io or whatever it is, or by email; you can find our emails.
47:15
Mm hmm.
47:19
We always love to hear from our listener.
47:21
Mm hmm, mm hmm.
47:22
And, you know, folks were very positive about some of our more human-focused and less tech-focused stuff, but we can get back to tech as well.
47:29
Yeah.
47:30
That's cool.
47:30
You know, I take the philosophy that the human elements of software are inseparable from the technological elements.
47:37
Yeah.
47:38
They are sort of one and the same thing. It's hard to see that, but it really is true. And if you want a great example of this, take a code base that was written by other human beings and give it to a different set of human beings and then watch them.
47:48
Oh, yeah.
47:52
Oh, my gosh.
47:52
And if that doesn't convince you that technology and people are intimately intertwined, I don't think anything will.
48:00
We really do need to stop. But Kate Gregory has an amazing keynote speech where she talks about the empathetic programmer, or something like this.
48:07
Uh-huh.
48:08
And it's all about that. It's like: you should have empathy, because who's going to be reading this code? Your friends, your colleagues, and also probably yourself.
48:15
Yeah.
48:16
And obviously,
48:17
Yes. Future you who has slept since then and has no memory of what any of these things do.
48:17
As well, yeah, exactly. All right, we should stop now. But this has been fun. Thank you very much. And I guess we'll see you next time.
48:28
Yep, until next time.