ekr

Busy waiting could be considered bad because it wastes processor resources spinning on a condition instead of doing useful work. However, sometimes busy waiting is preferable to scheduled blocking.

Some examples include when the processor would otherwise be idle anyway, or when the overhead of scheduled blocking (descheduling and later rescheduling the thread) would cost more than the busy waiting itself.

Edit: Turns out this exact information is a couple of slides later in the presentation...

kayvonf

@ekr. Not a problem. It means you were thinking!

muyangya

One of the most common use cases of busy waiting is the spinlock. A thread trying to acquire the lock simply waits in a loop, repeatedly checking whether the lock is available. Spinlocks are used quite heavily in the Linux kernel, because for short critical sections they are faster than blocking locks. The kernel implements them with the "spinlock_t" type.

Whenever we want to use this lock, we can do things like this:

spin_lock(&mmlist_lock);
list_del(&mm->mmlist);
spin_unlock(&mmlist_lock);

When spin_lock(spinlock_t *lock) is called, it will eventually call __LOCK(), which is defined as follows:

#define __LOCK(lock) do { preempt_disable(); __acquire(lock); (void)(lock); } while (0)

When spin_unlock(spinlock_t *lock) is called, __UNLOCK() will be called:

#define __UNLOCK(lock) do { preempt_enable(); __release(lock); (void)(lock); } while (0)

yuel1

I have a question regarding the correctness of the spin lock. Once the condition is true, how do we ensure that only one of the threads proceeds into the critical section?

gryffolyon

It's because we use an atomic instruction. Atomics ensure that only one thread executes the instruction at any given point in time. Let's say we have a lock variable that is set to 1 when the lock is free. At this point we could have 1000 threads wanting to grab the lock. One way for them to acquire it is to exchange the value of the lock with a 0. This exchange is done atomically, meaning only one thread is allowed to complete its exchange at a time, and only after it is done does the next thread perform its exchange.

So even if we have 1000 threads all wanting to perform this exchange, they are all serialized here. The first thread to do the exchange reads the value 1 from the lock and sets the lock to 0. Since it got a 1 back, this thread holds the lock. The other 999 threads exchange in a 0 and get back a 0, so none of them hold the lock.

The thread holding the lock enters the critical section, performs its job, and then sets the lock value back to 1 when it is done so other threads can proceed.
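Here is a minimal sketch of this exchange-based spinlock using C11 atomics (my own illustration of the scheme described above, not the kernel's actual implementation; 1 means free, 0 means held):

#include <stdatomic.h>

// Illustrative only: 1 = lock is free, 0 = lock is held.
static atomic_int lock = 1;

void my_spin_lock(void) {
    // Atomically swap in 0; only the thread that reads back a 1 wins the lock.
    while (atomic_exchange(&lock, 0) != 1) {
        // Busy wait: all other threads keep retrying here.
    }
}

void my_spin_unlock(void) {
    // Set the lock back to 1 (free) so one of the spinning threads can grab it.
    atomic_store(&lock, 1);
}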

For more details on atomics, look at the synchronization slides from the 15-410 Operating Systems class.

abist

There is a similar notion in embedded systems, where interrupts are preferred over polling (which is similar to busy waiting) for the same reasons.

VP7

So, in a nutshell, a high-level gist:

  1. Busy waiting is the situation where each thread continuously (and annoyingly) asks whether the lock is available, which means every thread that needs a resource protected by the lock keeps polling for the lock.

  2. With blocking synchronization, on the other hand, the waiting threads are notified once the lock/resource becomes available (see the sketch after this list).
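A rough sketch of the blocking alternative using a pthread mutex and condition variable (the names resource_ready, wait_for_resource, and release_resource are made up for illustration). The waiting thread sleeps inside pthread_cond_wait() and is woken up when notified, rather than polling:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static bool resource_ready = false;

void wait_for_resource(void) {
    pthread_mutex_lock(&m);
    while (!resource_ready) {
        // The thread is descheduled here; it burns no CPU while it waits.
        pthread_cond_wait(&cv, &m);
    }
    resource_ready = false;   // claim the resource
    pthread_mutex_unlock(&m);
}

void release_resource(void) {
    pthread_mutex_lock(&m);
    resource_ready = true;
    pthread_cond_signal(&cv); // wake up one waiting thread
    pthread_mutex_unlock(&m);
}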

toutou

A spinlock is essentially repeated querying in a tight loop. It is a busy-waiting method and is only suitable when the expected waiting time is short. Although a while() loop is used here to illustrate a spinlock, a real lock is built on lower-level atomic operations.

zhiyuany

Busy waiting is bad because: 1) it wastes energy, and 2) it incurs a performance penalty when the loop exits, because the CPU detects a possible memory-order violation and flushes the speculatively executed loop iterations from the pipeline. This can be mitigated by putting a 'pause' instruction inside the loop, as sketched below.
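A rough sketch of what that looks like on x86, reusing the exchange-based lock convention from above (1 = free, 0 = held) and the _mm_pause() intrinsic:

#include <stdatomic.h>
#include <immintrin.h>   // _mm_pause(), x86 only

static atomic_int lock = 1;   // 1 = free, 0 = held, same convention as above

void spin_lock_with_pause(void) {
    while (atomic_exchange(&lock, 0) != 1) {
        // Hints to the CPU that this is a spin-wait loop: it reduces power use
        // and avoids the mis-speculation penalty when the loop finally exits.
        _mm_pause();
    }
}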