tchitten

In the DMV example, this lock would be analogous to going up to the teller and asking if they are busy. If they are, you leave, come back after X minutes, and repeat the process. If they are not, you get your turn.

There are multiple problems with this scheme, however. From the DMV's perspective, when the teller becomes free, no one may be there to take a turn, so throughput is low. From the user's perspective, if their return to the DMV does not line up exactly with when the teller becomes free, someone else will probably have claimed the teller already, so latency is high and fairness is low.
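
In C++-like code, that scheme might look roughly like the sketch below (the struct name and the 100-microsecond delay are made up for illustration; this isn't the slide's actual implementation):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Illustrative test-and-set lock with a fixed back-off delay
// (the "come back after X minutes" scheme described above).
struct BackoffLock {
    std::atomic<bool> locked{false};

    void lock() {
        // Try to grab the lock; on failure, go away for a fixed interval
        // before retrying, instead of spinning continuously on the flag.
        while (locked.exchange(true, std::memory_order_acquire)) {
            std::this_thread::sleep_for(std::chrono::microseconds(100));
        }
    }

    void unlock() {
        locked.store(false, std::memory_order_release);
    }
};
```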

yetianx

A similar strategy is used to avoid collisions when sending packets over a wireless network. In that case, though, the upper bound on the delay grows exponentially, and the actual delay is chosen uniformly at random from [0, upper bound).

Maybe a random number generator is too expensive to use in the lock implementation.
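
For illustration, that randomized exponential back-off idea applied to a test-and-set lock might look roughly like this in C++ (the window sizes are arbitrary placeholders; this is a sketch, not the actual wireless MAC rules or the slide's code):

```cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <random>
#include <thread>

// Each failed attempt doubles the back-off window (up to a cap), and the
// actual wait is drawn uniformly from [0, window).
struct ExpBackoffLock {
    std::atomic<bool> locked{false};

    void lock() {
        unsigned window_us = 1;               // current back-off window (placeholder units)
        const unsigned max_window_us = 1024;  // arbitrary cap on the window
        thread_local std::mt19937 rng{std::random_device{}()};

        while (locked.exchange(true, std::memory_order_acquire)) {
            std::uniform_int_distribution<unsigned> dist(0, window_us - 1);
            std::this_thread::sleep_for(std::chrono::microseconds(dist(rng)));
            window_us = std::min(window_us * 2, max_window_us);
        }
    }

    void unlock() {
        locked.store(false, std::memory_order_release);
    }
};
```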

sbly

@yetianx I don't think that's the case; I think this is just a very simple example. Randomly choosing a wait time for each processor would be a much better scheme, especially in terms of fairness. That way, processors that keep missing the lock wouldn't be punished; in expectation, each processor would acquire the lock the same number of times.

bstan

It seems as though exponential back-off is used to improve scalability by reducing the traffic generated by busy-waiting. As noted before, it probably wouldn't be practical for critical sections with high contention. Here's an interesting link about scalable locks.

jmnash

At first this implementation seemed pretty useless to me, since in most cases it has the same or worse performance characteristics than test-and-test-and-set. But then I realized that the type of lock used should depend on what is expected to happen in the program... For example, test-and-set with back-off would probably be better than both test-and-set and test-and-test-and-set when low contention is expected: the latency is the same as test-and-set (better than test-and-test-and-set), there is less traffic and more scalability, as with test-and-test-and-set, and the storage cost is the same. Although this implementation has the drawback of possible extreme unfairness, that is unlikely to happen under low contention, because each process will probably acquire the lock after just a few iterations.
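
For reference, here is a sketch of the test-and-test-and-set lock being compared against (again just illustrative C++, not the slide's code):

```cpp
#include <atomic>

struct TTASLock {
    std::atomic<bool> locked{false};

    void lock() {
        while (true) {
            // "test": spin on a plain load, which can hit in the local cache
            while (locked.load(std::memory_order_relaxed)) { /* spin */ }
            // "test-and-set": a single attempt that generates coherence traffic
            if (!locked.exchange(true, std::memory_order_acquire))
                return;
        }
    }

    void unlock() {
        locked.store(false, std::memory_order_release);
    }
};
```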

jhhardin

@sbly I don't see how a random number generator would be better than just a constant delay time. In expectation the traffic would be equally high in both cases, but fairness seems like it could only be worse with randomly chosen delays. I agree with @bstan and @jmnash that exponential delays seem like a good scheme to reduce traffic in situations with low contention.

nrchu

@jhhardin

If the critical section takes a very consistent amount of time, I can think of some (extreme) edge cases where a constant delay happens to be timed so that a certain worker always asks for the lock at the wrong moment for many iterations in a row. Imagine that the delay is about as long as the critical section's running time, but this worker always requests the lock roughly in the middle of that critical section, and contention is high, so the timing doesn't change much. Making the delay random avoids such odd cases.