Slide 18 of 29

A processor trying to acquire the lock atomically increments the l->next_ticket variable. Since no other processor reads this variable while spinning, the resulting invalidation generates no traffic among the waiters. On a lock release, l->now_serving is written, which invalidates the cache line held by every processor waiting for the lock and triggers a read request from each of them. In an efficient cache implementation, however, the response to the first read of l->now_serving can be snooped by all the other waiting processors.
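The scheme above can be sketched with C11 atomics. This is a minimal illustration, not the lecture's code; the type and function names (ticket_lock_t, ticket_lock, ticket_unlock) are my own, while the field names follow the l->next_ticket / l->now_serving names used above.

```c
#include <stdatomic.h>

typedef struct {
    atomic_int next_ticket; // incremented once per acquire attempt
    atomic_int now_serving; // incremented once per release
} ticket_lock_t;

void ticket_lock(ticket_lock_t *l) {
    // Take a ticket: the only write an acquiring processor performs.
    int my_ticket = atomic_fetch_add(&l->next_ticket, 1);
    // Spin with reads only; no coherence traffic is generated until a
    // release invalidates the line holding now_serving.
    while (atomic_load(&l->now_serving) != my_ticket)
        ;
}

void ticket_unlock(ticket_lock_t *l) {
    // A single write hands the lock to the next waiter in FIFO order.
    atomic_fetch_add(&l->now_serving, 1);
}
```

Note that the acquire path does exactly one write (the fetch-and-add) no matter how long the wait is; everything else is read-only spinning.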


A comparison of the ticket lock with test-and-set:

Latency: Higher

Traffic: Less (only reads while waiting for lock)

Scalability: Higher

Storage: Low (two integer counters, regardless of the number of waiters)

Fairness: More fair, because the ticket number ensures the lock is acquired in the order it was requested (FIFO).
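For contrast, here is a minimal test-and-set spinlock sketch (again C11 atomics, with names of my own choosing). It shows why the traffic comparison favors the ticket lock: every failed acquire attempt is a write, so all spinning processors repeatedly invalidate each other's copy of the lock's cache line.

```c
#include <stdatomic.h>

typedef struct {
    atomic_flag held; // set while the lock is owned
} tas_lock_t;

void tas_lock(tas_lock_t *l) {
    // Each loop iteration writes the cache line, unlike the ticket
    // lock's read-only spin.
    while (atomic_flag_test_and_set(&l->held))
        ;
}

void tas_unlock(tas_lock_t *l) {
    atomic_flag_clear(&l->held);
}
```

It also makes the fairness difference visible: nothing here orders the waiters, so whichever processor's test-and-set happens to land first after a release wins, regardless of how long the others have waited.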


The best thing about this lock, I think, is not just its cache-coherence-friendly performance but that it guarantees fairness. The spinlock is FIFO, so the oldest waiter acquires the lock first. And you get all of this with just two ints, not a queue!