Slide 15 of 35

At this finer granularity of locking, we lock only part of the linked list (another implementation could lock a different number of nodes). To read, we take a lock on the node we are reading. To delete, we take locks on the node we are trying to delete and on the node previous to it.
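As a concrete illustration of "lock the node you are reading", here is a minimal hand-over-hand lookup sketch in C with pthreads. This is not the slide's code: the sentinel head node and names like `node_new` and `list_contains` are my own assumptions.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

// Hypothetical layout: every node carries its own mutex, and a
// sentinel head node keeps traversal uniform (no head special case).
typedef struct Node {
    int value;
    struct Node *next;
    pthread_mutex_t lock;
} Node;

typedef struct { Node *sentinel; } List;

Node *node_new(int value, Node *next) {
    Node *n = malloc(sizeof *n);
    n->value = value;
    n->next = next;
    pthread_mutex_init(&n->lock, NULL);
    return n;
}

List *list_new(void) {
    List *l = malloc(sizeof *l);
    l->sentinel = node_new(0, NULL);
    return l;
}

// Hand-over-hand lookup: take the next node's lock before releasing
// the current one, so the node being read can never be freed under us.
int list_contains(List *list, int value) {
    Node *prev = list->sentinel;
    pthread_mutex_lock(&prev->lock);
    Node *cur = prev->next;
    while (cur) {
        pthread_mutex_lock(&cur->lock);     // grab the next lock first...
        pthread_mutex_unlock(&prev->lock);  // ...then drop the old one
        if (cur->value == value) {
            pthread_mutex_unlock(&cur->lock);
            return 1;
        }
        prev = cur;
        cur = cur->next;
    }
    pthread_mutex_unlock(&prev->lock);
    return 0;
}
```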


What would the fine-grained lock code look like for a doubly linked list?

I think it's fine with similar hand-over-hand locking as for the singly-linked list. We don't need to worry about the nodes' prev pointers because of the invariants we established for insertion and deletion here.


Question: There's a challenge to the students on this slide! Who can make tweaks to expose more parallelism?


I believe you can release the lock on cur sooner, like so:

    // after the while loop
    if (cur) unlock(cur->lock);
    n->next = cur;
    prev->next = n;


You know no one can delete cur, because to do so, they'd need the prev lock, which you have. Further, you know they can't insert between prev and cur, because you have the prev lock.

Before, we couldn't have concurrent insertions between {prev, cur} and {cur, cur->next}: a thread needed locks on both elements surrounding the new element, and only one thread can hold cur->lock at a time.

We now support this: thread 0 can hold prev->lock and thread 1 can hold cur->lock, and each can insert at the same time (since each releases the lock on its cur before inserting).
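To make the tweak concrete, here is a minimal, self-contained sorted-insert sketch in C (pthreads) that releases cur's lock right after the traversal loop. It is only my reading of the idea, not the slide's code; the sentinel head node and names like `node_new` are assumptions.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

// Hypothetical layout: per-node mutexes plus a sentinel head node,
// so insertion at the front needs no special case.
typedef struct Node {
    int value;
    struct Node *next;
    pthread_mutex_t lock;
} Node;

typedef struct { Node *sentinel; } List;

Node *node_new(int value, Node *next) {
    Node *n = malloc(sizeof *n);
    n->value = value;
    n->next = next;
    pthread_mutex_init(&n->lock, NULL);
    return n;
}

List *list_new(void) {
    List *l = malloc(sizeof *l);
    l->sentinel = node_new(0, NULL);
    return l;
}

// Sorted insert with the tweak discussed above: cur's lock is
// released as soon as the insertion point is fixed. Holding
// prev->lock alone suffices, because deleting cur or inserting
// between prev and cur would also require prev->lock.
void list_insert(List *list, int value) {
    Node *n = node_new(value, NULL);
    Node *prev = list->sentinel;
    pthread_mutex_lock(&prev->lock);
    Node *cur = prev->next;
    if (cur) pthread_mutex_lock(&cur->lock);
    while (cur && cur->value < value) {        // hand-over-hand traversal
        Node *old_prev = prev;
        prev = cur;
        cur = cur->next;
        pthread_mutex_unlock(&old_prev->lock);
        if (cur) pthread_mutex_lock(&cur->lock);
    }
    if (cur) pthread_mutex_unlock(&cur->lock); // the early release
    n->next = cur;
    prev->next = n;
    pthread_mutex_unlock(&prev->lock);
}
```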


@eknight7, but for a deletion, wouldn't you still need a third lock for the node after the element to be deleted to update its prev pointer?

EDIT: I guess you're right that we still only need two locks, since the second lock already prevents anyone from touching the node after the one being deleted.


Why must we lock the list to get the prev and cur nodes? Is it because other threads might insert before head?


@Khryl: Yes, we don't want other threads to modify list->head.


In the deletion code, if we acquire cur->lock before doing unlock(old_prev->lock), would it cause any issues other than locking overhead?


@0xc0ffee, I don't think your change will alter the behavior of the code. The most significant limitation of the code is that while one thread is operating on the linked list, no later thread can traverse past it. As a result, if we have head->n1->n2->n3->..., an insertion between {n2, n3} will never wait on an insertion between {n1, n2} because of a locked n2; and if an insertion between {n1, n2} is waiting on an insertion between {n2, n3} because of a locked n2, releasing cur->lock will not help, since cur in {n2, n3} is n3.


@kayvonf, the best thing I can think of is to change the lock to a reader/writer lock (or to use transactional memory). I am wondering what tweaks you were referring to?


To answer the question in the slide: we can release the lock on cur earlier, as soon as we come out of the while loop. The delete code cannot delete the cur node, because doing so requires the lock on the prev node. Since the inserting thread holds the lock on prev, the deleting thread will not be able to make progress.

I visualize it this way: we only need locks on the nodes whose "world view" we are changing. In the insert code we are not changing the "world view" of cur, because nothing in the Node structure records what is behind it. If it were a doubly linked list, things would be quite different.


@sasthana, I don't think your change will alter the behavior of the code, as I suggested in my previous answer.


@doodooloo, are you suggesting we have different kinds of locks (maybe two per node: one "insert after this" lock and one "insert before this" lock), so that insertions between A->B and B->C can happen simultaneously? We would take B's "insert before this" lock for the first insertion and B's "insert after this" lock for the second, since there isn't a concurrency problem between them. (For delete, I think we would need to take both.)


I feel the bottleneck of this implementation is that operations on the latter part of the list must wait on earlier ones. So would it help if, on hitting a locked node, we checked two nodes ahead, and the operation were not blocked as long as there is no overlap with the thread that owns the lock?


Just to make sure: this hand-over-hand approach is not lock-free programming but fine-grained locking, am I correct?


@yimmyz Hand-over-hand locking acquires locks on individual nodes of the data structure, so it qualifies as fine-grained locking. Because it still involves locks on the structure, it's not lock-free.


In general, I think this hand-over-hand approach can be generalized to most data structures (e.g. binary trees, B-trees, etc.). In database systems (15-415/615), we learned how a B-tree handles fine-grained locking with different lock modes (e.g. read/write locks), and the way each type of lock is handled is very similar to this.


Rules of thumb for when to lock what in a linked-list-like data structure:

  • Always take a lock on a node before reading it.
  • Always take a lock on a node before deleting it.
  • Always take locks on the nodes that point to the to-be-deleted node; after changing those nodes' pointers, release all locks and then free the node (nothing points to it anymore, so it is safe to free without locking).

There are, of course, other issues to consider for any specific data structure.
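Following these rules, a deletion sketch in C (pthreads) might look like the following. This is illustrative only: the sentinel head node and all names (`node_new`, `list_delete`, etc.) are my assumptions, not code from the lecture.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

// Hypothetical layout: every node carries its own mutex, and a
// sentinel head node means deletion never rewrites the head pointer.
typedef struct Node {
    int value;
    struct Node *next;
    pthread_mutex_t lock;
} Node;

typedef struct { Node *sentinel; } List;

Node *node_new(int value, Node *next) {
    Node *n = malloc(sizeof *n);
    n->value = value;
    n->next = next;
    pthread_mutex_init(&n->lock, NULL);
    return n;
}

List *list_new(void) {
    List *l = malloc(sizeof *l);
    l->sentinel = node_new(0, NULL);
    return l;
}

// Rules applied: lock a node before reading it; hold locks on both
// the to-be-deleted node and the node pointing at it; splice it out,
// release the locks, then free (nothing can reach the node anymore).
int list_delete(List *list, int value) {
    Node *prev = list->sentinel;
    pthread_mutex_lock(&prev->lock);
    Node *cur = prev->next;
    while (cur) {
        pthread_mutex_lock(&cur->lock);       // lock before reading cur
        if (cur->value == value) {
            prev->next = cur->next;           // unlink with both locks held
            pthread_mutex_unlock(&cur->lock);
            pthread_mutex_unlock(&prev->lock);
            pthread_mutex_destroy(&cur->lock);
            free(cur);                        // unreachable, safe to free
            return 1;
        }
        pthread_mutex_unlock(&prev->lock);    // hand over hand: advance
        prev = cur;
        cur = cur->next;
    }
    pthread_mutex_unlock(&prev->lock);
    return 0;
}
```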