I'm not sure I understand the quantifiers in this definition: in the previous examples using a lock, there was always some thread that was making progress (that is, the thread that holds the lock). So there exists SOME thread that is making progress. Do you mean EVERY thread must always make progress?
@gritz The OS has no concept of a "lock", so it may deschedule the thread holding the lock for scheduling reasons. Then the only thread that could make progress (the lock-holding thread) isn't running, and none of the other threads can make progress either, since they can't acquire the lock.
In the lock-free case, the first thread to complete the CAS always makes progress, so there's no risk of the whole system stalling indefinitely.
@TomoA Oh, I see. So "progress" here doesn't mean "is executing instructions" but rather "is making progress on a push or pop"?
@gritz Yes that is the case.
My understanding has always been that the difference between locking and lock-free algorithms is existential versus universal quantification:
- There exists a thread which can make progress
- All threads can make progress
But I'm not sure how I reconcile that with the text presented here.
@PIC Consider the lock free linked list implementation. If multiple threads want to insert to the same spot, then only one of the threads can successfully do a CAS, and the others will not progress at all. However, we can always guarantee that the thread that wins the CAS race will make progress.
In the locking linked list case, if multiple threads want to insert to the same spot, there's going to be only one thread that holds the lock to the node we're looking at. Now, if the locking thread is delayed for some reason (OS switches it out, page fault, memory load, etc), then there aren't any threads making progress at all.
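To make that race concrete, here's a minimal sketch (the struct and function names are mine, not from the lecture code) of two threads trying to splice a node in after the same predecessor. Only the thread whose CAS still sees the expected successor wins; the loser's CAS fails and it must re-find its position and retry.

```cpp
#include <atomic>

// Hypothetical sketch of the "insert at the same spot" race.
struct Node {
    int value;
    std::atomic<Node*> next;
    explicit Node(int v) : value(v), next(nullptr) {}
};

// Try to insert `n` between `prev` and what we believe follows it.
// Returns true for the single winner of the race, false for losers,
// who must re-traverse and try again.
bool try_insert_after(Node* prev, Node* n) {
    Node* succ = prev->next.load();
    n->next.store(succ);
    // Succeeds only if prev->next is still `succ`, i.e. no one raced us.
    return prev->next.compare_exchange_strong(succ, n);
}
```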
This blog post might clarify some of the definitions.
@everyone: "progress" should be interpreted as running on a processor and successfully completing an operation on the data structure.
In a blocking implementation, a thread with a lock can get swapped out by the OS (indefinitely) and thus no thread is making progress in operations on the data structure.
In a non-blocking implementation, there's always a thread that can complete its operation on the data structure. For example, in our non-blocking stack, that's the thread that successfully completes the next compare_and_swap. If a thread is in the middle of performing a pop, and it happens to get swapped out, the fact that it's in the middle of an operation won't prevent another thread from successfully completing a push or pop operation.
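For reference, a minimal Treiber-style sketch of such a non-blocking stack (not the exact class from the slides; names are mine). Note that a thread swapped out mid-operation just holds a stale snapshot, which makes its own CAS fail later; it never prevents another thread's CAS from succeeding.

```cpp
#include <atomic>

struct Node {
    int value;
    Node* next;
};

struct NonBlockingStack {
    std::atomic<Node*> top{nullptr};

    void push(int v) {
        Node* n = new Node{v, top.load()};
        // Exactly one racing thread wins each CAS; on failure,
        // n->next is refreshed to the current top and we retry.
        while (!top.compare_exchange_weak(n->next, n)) {}
    }

    bool pop(int* out) {
        Node* old = top.load();
        while (old && !top.compare_exchange_weak(old, old->next)) {}
        if (!old) return false;   // stack was empty
        *out = old->value;
        delete old;  // real code needs safe memory reclamation (e.g. hazard pointers)
        return true;
    }
};
```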
I understand what you're saying, I just don't see that reflected by the text on the slide.
I think my confusion is that the slide is only considering threads that are currently running. It's saying that, of the threads running at any given time, one of them is guaranteed to make progress.
Does non-blocking algorithm and lock-free algorithm mean the same thing?
@haibinl - No, non-blocking is a weaker condition than lock-freedom. Non-blocking means that the failure of one thread cannot cause failure or suspension of another thread.
Lock-freedom is a bit stronger. An algorithm is said to have lock-freedom if after some finite period of time, some thread makes progress.
There's a guarantee called obstruction-freedom, which says that, if you fix some thread and pause all the other threads, that thread will make progress.
I guess you can have an algorithm that is non-blocking, obstruction-free, but not lock-free. (Though I can't think of one...)
(Also, it's worth noting, I'm mostly reading this info from wikipedia and the slides, so I'm a noob like everyone else :P)
@haibinl: An example of non-blocking synchronization that is not lock-free is spin-waiting. If a thread fails to acquire a lock, it will loop until the lock is released. Since it's actively looping, it's considered non-blocking synchronization. Spin-waiting involves locks, so it's definitely not lock-free.
Edit: This is actually incorrect based on how we defined blocking (see randomthread's post below).
@BensonQiu On the previous slide we defined blocking as allowing one thread to prevent other threads from completing operations on a shared data structure indefinitely. Even though a thread is actively looping it can still be indefinitely prevented from operating on the shared data structure. So I think spinning would be considered blocking despite the active nature of how it waits.
Edit: To clarify, when I referred to looping/spinning I meant spinning on a lock waiting for it to become free.
You're describing a concept typically referred to as wait-free.
non-blocking: a thread can make progress independently; it doesn't have to wait for other threads to complete;
lock-free: at least one thread is always making progress, so the system as a whole can't block, but an individual operation might take an unbounded number of steps to complete.