Slide 18 of 37

A possible middle-ground solution is to maintain a granularity threshold: if the list grows beyond the threshold, use a single lock for the entire list instead of one per node, so that the overhead of acquiring a lock at every traversal step is avoided; otherwise, use fine-grained locking. This adaptive locking scheme handles the tradeoff between parallelism and overhead.
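A minimal sketch of that idea, following the rule in the comment above (long list → one coarse lock, short list → hand-over-hand per-node locks). The class, the threshold value, and all names are illustrative assumptions, not part of any real assignment solution:

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>

// Hypothetical hybrid list. Per the comment's rule: above the threshold,
// one coarse lock covers the whole traversal (no per-step lock overhead);
// at or below it, we use fine-grained hand-over-hand locking.
struct Node {
    int value;
    Node* next = nullptr;
    std::mutex m;  // per-node lock, used only in fine-grained mode
};

class HybridList {
    static constexpr std::size_t kThreshold = 2;  // tiny, for illustration
    Node* head = nullptr;
    std::size_t size = 0;
    std::mutex list_lock;  // coarse lock; also protects head/size

public:
    void push_front(int v) {
        std::lock_guard<std::mutex> g(list_lock);
        head = new Node{v, head};
        ++size;
    }

    bool contains(int v) {
        list_lock.lock();
        if (size > kThreshold) {
            // Coarse mode: hold the single list lock for the whole scan.
            bool found = false;
            for (Node* n = head; n; n = n->next)
                if (n->value == v) { found = true; break; }
            list_lock.unlock();
            return found;
        }
        // Fine-grained mode: lock the first node, then drop the list lock
        // and walk hand-over-hand (lock next before unlocking current),
        // always acquiring locks in list order so deadlock cannot occur.
        Node* cur = head;
        if (!cur) { list_lock.unlock(); return false; }
        cur->m.lock();
        list_lock.unlock();
        while (cur) {
            if (cur->value == v) { cur->m.unlock(); return true; }
            Node* nxt = cur->next;
            if (nxt) nxt->m.lock();   // acquire next before releasing cur
            cur->m.unlock();
            cur = nxt;
        }
        return false;
    }
};
```

In a real implementation the threshold would be tuned, and switching modes while other threads are mid-traversal needs care; the sketch only shows the two locking disciplines side by side.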


As far as I can remember, if all locks are acquired and released in the same order, then deadlock will not occur.


@bochet! Correct!


Some students proposed approaches that avoid acquiring and releasing locks during traversal. This doesn't work in C++ because you can't tell whether a pointer to a node is still valid. I believe the proposed method could work, with small modifications, in garbage-collected languages like Java. I also believe it could work in C++ if we implemented basic garbage collection ourselves with memory pools and epochs. I also find that the idea behind some lock-free algorithms (e.g. reclaim) is quite similar.
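A toy sketch of the deferred-reclamation idea mentioned above (all names assumed): instead of deleting a node the moment it is unlinked, which could invalidate a pointer another traversal still holds, the node is "retired" and freed only at a quiescent point, when no reader can still hold a reference. Epoch-based reclamation automates exactly this bookkeeping:

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <vector>

struct Node { int value; Node* next; };

// Hypothetical retire list. retire() is called when a node is unlinked
// from the data structure; reclaim() is called only when the caller
// knows no traversal is in progress (an epoch boundary / quiescent
// point), so the deferred deletes are safe.
class RetireList {
    std::vector<Node*> retired;
    std::mutex m;

public:
    void retire(Node* n) {  // unlinked, but a reader may still see it
        std::lock_guard<std::mutex> g(m);
        retired.push_back(n);
    }

    // Returns how many deferred nodes were actually freed.
    std::size_t reclaim() {
        std::lock_guard<std::mutex> g(m);
        std::size_t freed = retired.size();
        for (Node* n : retired) delete n;
        retired.clear();
        return freed;
    }
};
```

The hard part that real epoch schemes solve, and this sketch does not, is detecting the quiescent point itself without stopping all threads.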


@bochet, why do we need to release the locks in the same order? Does it matter?


As long as the order of acquiring locks is not broken, the order of releasing the locks will not matter.


@firebb take for example this pseudocode:

thread 1:

lock(mutex1); lock(mutex2); /* critical section */ unlock(mutex2); unlock(mutex1);

thread 2:

lock(mutex2); lock(mutex1); /* critical section */ unlock(mutex1); unlock(mutex2);

Now what if thread 1 locks mutex1 and thread 2 locks mutex2? We would have a deadlock: each thread waits forever for the lock the other one holds. I think we lock and unlock mutexes in a single order to avoid situations like this.


@shpeefps Isn't this changing the order of acquiring locks as well? If both threads acquired the locks in the same order (mutex1 then mutex2), there couldn't be a deadlock since thread 2 couldn't lock something that thread 1 is about to, and thread 2 can only acquire the locks once thread 1 relinquishes ownership of mutex1, regardless of when that may be.
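To illustrate the point being made here, a small runnable sketch (worker names and the shared counter are made up for the example): both threads acquire the two mutexes in the same global order, mutex1 then mutex2, so the circular wait from the pseudocode above cannot form. The two workers deliberately release in *different* orders to show that release order is irrelevant:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

std::mutex mutex1, mutex2;
int counter = 0;  // shared state guarded by both mutexes

void worker_a() {
    mutex1.lock(); mutex2.lock();      // same acquisition order...
    ++counter;                         // critical section
    mutex2.unlock(); mutex1.unlock();  // ...releases 2 then 1
}

void worker_b() {
    mutex1.lock(); mutex2.lock();      // same acquisition order...
    ++counter;                         // critical section
    mutex1.unlock(); mutex2.unlock();  // ...releases 1 then 2
}
```

Because neither thread can ever hold mutex2 while waiting for mutex1, no cycle of waiting is possible, no matter how the threads interleave or in what order they unlock.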