Slide 8 of 34

To clarify, this is deadlock because P1 is waiting for the bus, while another processor holds the bus and is waiting for P1 to respond to its BusRd for X. Since that BusRd cannot complete (and thus free the bus) until P1 responds, P1 is transitively waiting on itself, which is a circular wait.
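The circular wait above can be made concrete with a toy wait-for graph (the node names are mine, not from the slide): each entity points at the one it waits on, and a cycle means deadlock.

```python
# Hypothetical wait-for graph for the scenario: P1 waits for the bus,
# the bus is busy with another processor's BusRd for X, and that BusRd
# waits on P1's snoop response. A cycle back to P1 means deadlock.

def has_cycle(wait_for, start):
    """Follow waits-for edges from start; revisiting a node is a cycle."""
    seen = set()
    node = start
    while node in wait_for:
        if node in seen:
            return True
        seen.add(node)
        node = wait_for[node]
    return False

wait_for = {
    "P1": "bus",        # P1 wants the bus for its own request
    "bus": "BusRd(X)",  # the bus is held by the pending BusRd for X
    "BusRd(X)": "P1",   # that transaction needs P1's response
}

print(has_cycle(wait_for, "P1"))  # True: P1 transitively waits on itself
```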

Mutual exclusion holds because only one processor can use the bus at a time.

Hold and wait holds because the processor that issued the BusRd for X is holding the bus while waiting for P1 to respond.

No preemption holds because nothing can forcibly take the bus away: the processor that issued the BusRd for X cannot simply cancel its outstanding request, and P1 is not capable of servicing the incoming BusRd while it is stalled waiting to issue its own request.

We fix the issue by allowing P1 to service incoming requests while it waits, in effect preempting its own pending request.
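The fix can be sketched as a controller loop (names and structure are my own simplification): instead of blocking until the bus is granted, the controller drains incoming snoop requests on every iteration of its wait, so the pending BusRd for X gets its response and the bus can free up.

```python
from collections import deque

# Sketch of the fix: while P1's own request waits for a bus grant,
# incoming snoops are serviced instead of being ignored, breaking the
# circular wait. bus_granted is a callable polled each iteration.

def wait_for_bus(incoming_snoops, bus_granted):
    serviced = []
    while not bus_granted():
        if incoming_snoops:
            # respond to the snoop rather than stalling on it
            serviced.append(incoming_snoops.popleft())
    return serviced

snoops = deque(["BusRd(X)"])
grants = iter([False, False, True])
print(wait_for_bus(snoops, lambda: next(grants)))  # ['BusRd(X)']
```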


At an abstract level this looks like a contradiction: we set up a condition where P1 has to wait to send its request, and also a condition where P1 has to do work while it waits.

While this may seem like an issue, an efficient "waiting" protocol would likely buffer P1's request until the bus tells P1 it can send it, rather than have P1 spin until the bus is free. The event-based model I've just described would naturally allow P1 to service other requests while its own is buffered.

I am not sure how easy this is to implement in hardware, but I'd imagine an agnostic loop that simply polls for ANY kind of update (rather than a specific "CAN_USE_BUS" signal) would be a good first attempt at the problem.
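The buffering idea above could look something like this toy event-driven controller (class and event names are hypothetical, invented for illustration): the outgoing request sits in a buffer, snoop events are serviced immediately, and a bus-free event releases the buffered request.

```python
# Toy event-driven controller: issue() buffers the outgoing request
# instead of spinning; on_event() services snoops and releases the
# buffered request when the bus reports itself free.

class Controller:
    def __init__(self):
        self.pending = None
        self.log = []

    def issue(self, req):
        self.pending = req  # buffer rather than busy-wait

    def on_event(self, event):
        if event == "BUS_FREE" and self.pending is not None:
            self.log.append(("sent", self.pending))
            self.pending = None
        elif event.startswith("SNOOP:"):
            self.log.append(("serviced", event[len("SNOOP:"):]))

c = Controller()
c.issue("BusRdX(Y)")
for e in ["SNOOP:BusRd(X)", "BUS_FREE"]:
    c.on_event(e)
print(c.log)  # [('serviced', 'BusRd(X)'), ('sent', 'BusRdX(Y)')]
```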


To elaborate on @bwasti's idea, I don't see why we couldn't simply add another wire to the bus, driven by the bus itself, to indicate that it is free. When the voltage on this wire is high, the processors can make their requests and the bus arbiter will decide accordingly.
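A rough software model of that extra wire (fixed-priority arbitration is my assumption here, chosen just for illustration): requests are only considered while the free line is high, and the arbiter grants exactly one requester.

```python
# Model of a "bus free" wire plus arbiter: when bus_free is high, the
# arbiter grants exactly one of the asserted request lines (lowest
# index wins under this assumed fixed-priority scheme); otherwise no
# grant is issued.

def arbitrate(bus_free, request_lines):
    """Return the index of the granted processor, or None."""
    if not bus_free:
        return None
    for i, req in enumerate(request_lines):
        if req:
            return i
    return None

print(arbitrate(True, [False, True, True]))   # 1
print(arbitrate(False, [True, True, True]))   # None
```

A real arbiter would likely rotate priority (round-robin) to avoid starving high-index processors, but the free-wire gating is the same.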