Fantasy

When the number of threads waiting to run exceeds the number of execution contexts, busy waiting is usually bad. For example, if there is only one execution context, the busy-waiting thread prevents the thread that actually holds the lock from running. From the CPU's perspective it is fully utilized; from the system's perspective, however, the busy waiting makes no progress.

mperron

@Fantasy I don't think that is quite right. Consider the case where we expect the lock to be held for a very short period of time (tens of cycles). It is unlikely that the thread that took the lock was switched off the CPU while holding it. Even if there are more threads than execution contexts, we may still wish to use a spin lock because the thread will not wait long. That is, in many applications the cost of waiting on a mutex may be much greater than spinning, even when there are more threads than execution contexts.

Fantasy

@mperron Yes, I agree. That's what we learned during today's class. As slide 16 says, we may prefer a spin lock when the scheduling overhead is larger than the expected waiting time, or when there are fewer threads than execution contexts.

However, I guess the reason we are taught that "busy waiting" is bad is that it usually consumes resources without making actual progress.

0xc0ffee

I'd never really thought about the rescheduling overhead dominating the cost of waiting. Does anyone know of examples where this is the case?

Fantasy

@0xc0ffee For example, suppose the only work the threads need to execute exclusively is a single add instruction, which takes only one cycle. With a traditional mutex, a thread that fails to acquire the lock is scheduled out, and that context switch may cost hundreds of cycles. With a spin lock, it just busy-waits a cycle or so for the other thread and can then continue to make progress.
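
To make that concrete, here is a minimal C++ sketch (not code from the lecture) of the two options for a one-increment critical section: a spin lock built on std::atomic_flag versus a std::mutex, which may block and reschedule the waiter.

```cpp
#include <atomic>
#include <mutex>

std::atomic_flag spin = ATOMIC_FLAG_INIT;
std::mutex       mtx;
long             counter = 0;

void increment_with_spinlock() {
    // Busy-wait: keep doing test-and-set until we observe the flag was clear.
    // If the holder is running on another core, this is only a few cycles.
    while (spin.test_and_set(std::memory_order_acquire)) { /* spin */ }
    ++counter;                              // tiny critical section
    spin.clear(std::memory_order_release);  // release the lock
}

void increment_with_mutex() {
    // If the mutex is contended, the thread may be descheduled, which can
    // cost hundreds of cycles of kernel/scheduler overhead.
    std::lock_guard<std::mutex> guard(mtx);
    ++counter;
}
```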

althalus

To add on, in a lot of cases where spin locks are used, the spinning could be replaced with a delay function that puts the spinning context to sleep, saving CPU execution time. However, a spin lock is still preferable if there are a lot of interruptions to the while loop.
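
Something like that spin-then-sleep idea could look like the sketch below; the flag, the spin threshold, and the sleep length are made up for illustration.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

void wait_for_flag(std::atomic<bool>& flag) {
    int spins = 0;
    while (!flag.load(std::memory_order_acquire)) {
        if (++spins < 1000) {
            continue;  // pure busy-wait while the wait is expected to be short
        }
        // Past some threshold, give up the CPU so other threads can run.
        std::this_thread::sleep_for(std::chrono::microseconds(50));
    }
}
```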

chuangxuean

I have trouble seeing how this is different from the blocking method in terms of how threads are scheduled onto cores. Won't the spinning thread have to wait for another thread to modify condition X, and doesn't that require taking this thread off the CPU to allow the other thread to run? Or does this assume the thread is running on a multi-threaded core, where the thread mapped onto the other execution context could potentially modify X?

CaptainBlueBear

@chuangxuean The difference between blocking and spinning in terms of scheduling is that in the spinning case, the thread keeps spinning in a tight loop until the while condition becomes false, and it keeps running on the core unless the OS scheduler happens to swap it out.

In the blocking case, the thread is immediately swapped out and stays out (marked as not runnable) until some other thread signals that the condition it's waiting on may have changed, at which point it's marked as runnable again and will be swapped back in by the OS scheduler at some point in the future.
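
A rough C++ sketch of those two waiting styles (the shared flag X and the helper names here are hypothetical): spinning keeps the thread runnable on a core, while a condition-variable wait marks it not runnable until another thread notifies it.

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>

// Shared state for the blocking version.
std::mutex              m;
std::condition_variable cv;
bool                    X = false;

// Shared state for the spinning version.
std::atomic<bool> X_atomic{false};

// Spinning: the thread stays runnable and occupies a core until the
// scheduler happens to preempt it.
void wait_spinning() {
    while (!X_atomic.load(std::memory_order_acquire)) { /* spin */ }
}

// Blocking: cv.wait() marks the thread not runnable; the OS only makes it
// runnable again after another thread sets X and calls notify.
void wait_blocking() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return X; });
}

// The thread that changes the condition.
void set_condition() {
    { std::lock_guard<std::mutex> lk(m); X = true; }
    cv.notify_all();                                   // wake blocked waiters
    X_atomic.store(true, std::memory_order_release);   // wake the spinner too
}
```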

stride16

I would just like to reiterate the difference between blocking and busy waiting. Busy waiting refers to a thread spinning on a value, waiting for it to change, but making no other progress. In the case of blocking, by contrast, if a thread is waiting on a lock, the OS switches that thread off the processor.

nmrrs

It seems especially bad to repeatedly check the state of a lock when the threads doing so are on different cores of the machine with different caches. That generates a lot of interconnect traffic for cache coherence that could be avoided if we descheduled the waiting threads instead.
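
One standard way to cut that coherence traffic while still spinning is test-and-test-and-set: spin on an ordinary read of the (locally cached) lock word and only attempt the atomic exchange when the lock looks free. A minimal sketch, not taken from the slides:

```cpp
#include <atomic>

struct TTASLock {
    std::atomic<bool> locked{false};

    void lock() {
        while (true) {
            // Read-only spin: each waiter spins on its own cached copy,
            // so there is no coherence traffic while the lock stays held.
            while (locked.load(std::memory_order_relaxed)) { /* spin */ }
            // Lock appears free: now try the atomic read-modify-write,
            // which is what invalidates the other caches.
            if (!locked.exchange(true, std::memory_order_acquire))
                return;
        }
    }

    void unlock() { locked.store(false, std::memory_order_release); }
};
```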