Slide 13 of 31
regi

One middle-ground solution is to use one lock for every n nodes. This reduces the storage overhead while still allowing inserts and deletions in different sections of the linked list to proceed in parallel.
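One way to picture this (a sketch of my own, not from the lecture; the names `Segment` and `SegmentedList` are made up): keep the list as a chain of segments, each holding a run of consecutive nodes and guarded by its own mutex. An insert takes only the lock of the segment it touches, so threads working in different segments never contend. Deletions that empty a segment or splits that rebalance one are exactly the hard cases raised below, and are left out here.

```cpp
#include <cassert>
#include <list>
#include <mutex>
#include <thread>
#include <vector>

// Each segment stands in for a block of ~n consecutive nodes,
// guarded by one mutex.
struct Segment {
    std::mutex m;
    std::list<int> nodes;
};

struct SegmentedList {
    std::vector<Segment> segs;
    explicit SegmentedList(int nsegs) : segs(nsegs) {}

    // Only the target segment's lock is taken, so inserts into
    // different segments run fully in parallel.
    void insert(int seg, int value) {
        std::lock_guard<std::mutex> g(segs[seg].m);
        segs[seg].nodes.push_back(value);
    }

    // Total node count; locks each segment in turn.
    int size() {
        int total = 0;
        for (auto& s : segs) {
            std::lock_guard<std::mutex> g(s.m);
            total += static_cast<int>(s.nodes.size());
        }
        return total;
    }
};
```

The storage cost is one mutex per segment rather than one per node, which is the trade-off the comment describes.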

ak47

We would have to be really careful if we use a lock every n nodes, because deletes can cause those blocks of n nodes to start overlapping. You could maybe let the block sizes be flexible so you could guarantee they never overlap. But in that case, if you frequently inserted and deleted in the middle, it's not obvious how to keep the blocks balanced over time. It seems an adversary could force some blocks to shrink to size zero while others grow huge.

grose

One solution might be to have a global array of locks, where locks[0] guards the first N nodes, locks[1] the second N, and so on. The locks aren't tied to specific nodes, just to the indices of nodes.

Question: What is the problem with this approach?

ak47

Couldn't a thread lock a node with locks[i], then some nodes get deleted so that the locked node falls into block i-1, and another thread locks the same node with locks[i-1]? Two different threads would then think they have exclusive control of the same node.
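The hazard is visible in the index arithmetic alone (a tiny sketch with made-up numbers; the real failure also needs the interleaving described above): the lock guarding a node is computed from the node's current position, and deletions change that position between one thread's lookup and another's.

```cpp
#include <cassert>

// N nodes per lock "stripe" (hypothetical value for illustration).
constexpr int N = 4;

// The lock protecting a node is derived from its position in the list,
// not from the node itself.
int lock_index(int pos) { return pos / N; }
```

With N = 4, a node at position 4 maps to locks[1]; if one node ahead of it is deleted, the same node now sits at position 3 and maps to locks[0]. Two threads that compute the lock before and after the deletion acquire different mutexes for the same node, which is exactly the lost-mutual-exclusion scenario above.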