Q_Q

A quick optimization here is to move "unlock(list->lock);" above "delete cur;" in delete(), so that it looks like this:

if (cur->value == value) {
    prev->next = cur->next;   // cur is now unlinked from the list
    unlock(list->lock);       // release the lock before the potentially slow free
    delete cur;
    return;
}

Since delete is implemented on top of malloc's free(), free() could take a long time, so it's worth letting other linked-list deletes proceed while the free() is occurring. Also, if the malloc implementation handles multiple threads well, this lets us take advantage of that.
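
For reference, here is a minimal sketch of what the whole delete could look like with the unlock hoisted. It uses std::mutex in place of the slide's lock/unlock pseudocode, and the names List, Node, and list_delete are just illustrative, not the slide's exact code:

#include <mutex>

struct Node {
    int value;
    Node* next;
};

struct List {
    Node* head = nullptr;
    std::mutex lock;   // single lock protecting the whole list
};

void list_delete(List* list, int value) {
    list->lock.lock();
    Node* prev = nullptr;
    Node* cur = list->head;
    while (cur) {
        if (cur->value == value) {
            if (prev) prev->next = cur->next;   // unlink cur from the list
            else      list->head = cur->next;
            list->lock.unlock();   // cur is no longer reachable through the list
            delete cur;            // potentially slow free runs outside the critical section
            return;
        }
        prev = cur;
        cur = cur->next;
    }
    list->lock.unlock();           // value not found
}

The only change from the straightforward version is that the unlock happens before the delete; everything that touches shared list state still runs under the lock.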

yetianx

@Q_Q: I think your optimization breaks mutual exclusion: it releases the lock before deleting the node, which could leave the linked list corrupted.

jinsikl

@yetianx I don't think so. By the time delete cur is called, the node pointed to by cur is no longer in the list, so no other thread can reach it, and it's safe to release the lock before calling delete. Of course, we have to assume that delete (i.e., the allocator's free) is correctly implemented to handle concurrent calls from multiple threads, but that seems like a pretty safe assumption to make.

traderyw

In cases where not many threads are competing for the data structure, a single global lock can actually be faster than fine-grained locking, because the fine-grained version incurs more overhead per operation.
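
To make that overhead concrete, here is a rough sketch contrasting the two designs (std::mutex and all names here are illustrative, not any particular implementation). With one global lock, an operation pays exactly one lock/unlock pair; with per-node locks and hand-over-hand traversal, it pays a pair per node visited, and every node carries its own lock:

#include <mutex>

// Coarse-grained: a single lock protects the entire list.
struct CGNode { int value; CGNode* next; };
struct CGList { CGNode* head; std::mutex lock; };

// Fine-grained: every node has its own lock (extra space per node).
struct FGNode { int value; FGNode* next; std::mutex lock; };

// Hand-over-hand traversal: acquire the next node's lock before
// releasing the current one, so the chain is never followed unlocked.
bool fg_contains(FGNode* head, int value) {
    if (!head) return false;
    head->lock.lock();
    FGNode* cur = head;
    while (cur) {
        if (cur->value == value) {
            cur->lock.unlock();
            return true;
        }
        FGNode* next = cur->next;
        if (next) next->lock.lock();   // take the next lock first...
        cur->lock.unlock();            // ...then drop the current one
        cur = next;
    }
    return false;
}

Under low contention all of those extra lock operations are pure overhead; fine-grained locking only pays off when enough threads would otherwise serialize on the single list lock.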