Slide 38 of 66

Deadlock can happen here because the bus is a resource that the processors compete for, and a processor that issues a request holds the bus until it receives a response (slide 27). Suppose P2 issues a BusRdX on the bus for cache line B, and P1 has the most up-to-date (dirty) copy. If P1 does not respond to that request, and instead waits for the bus so it can issue its own BusRdX for cache line A, a deadlock results: P2 will not release the bus until it gets cache line B from P1, and P1 will not send cache line B until it acquires the bus to request cache line A. This satisfies all the conditions of deadlock listed on slide 19.
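To make the circular wait concrete, here is a minimal sketch (names like `P1`, `line_B`, and `find_deadlock` are illustrative, not from any real simulator) that models who holds which resource and who waits for what, then walks the wait-for graph to find the cycle:

```python
# Hypothetical model of the scenario above:
# P2 holds the bus and waits for line B; P1 holds the dirty copy of
# line B and waits for the bus (to issue BusRdX for line A).
holds = {"P2": "bus", "P1": "line_B"}       # processor -> resource it holds
waits_for = {"P2": "line_B", "P1": "bus"}   # processor -> resource it needs

def find_deadlock(holds, waits_for):
    """Return a cycle of processors, each waiting on a resource
    held by the next one, or None if no cycle exists."""
    owner = {res: p for p, res in holds.items()}
    for start in waits_for:
        seen, p = [], start
        while p not in seen:
            seen.append(p)
            res = waits_for.get(p)
            if res is None or res not in owner:
                break                        # chain ends; no cycle from here
            p = owner[res]                   # follow the wait-for edge
        else:
            return seen[seen.index(p):]      # revisited a processor: cycle
    return None

print(find_deadlock(holds, waits_for))       # → ['P2', 'P1']
```

The cycle `P2 → P1 → P2` is exactly the circular-wait condition from slide 19: each processor holds something the other needs.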


So the slide says the way to avoid this is for P1 to be able to service incoming transactions while it waits to issue its own requests, which makes me wonder: is there a reason all processors aren't built to do that? Is it too demanding? If it isn't, it seems like something every processor should do to avoid this kind of deadlock.
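Continuing the toy model from the comment above (all names are illustrative), the fix can be sketched as P1 snooping and servicing P2's BusRdX immediately, even while its own request is pending. That removes the hold-and-wait edge, so no cycle can form:

```python
# Hypothetical sketch of the fix. Resource/processor names are illustrative.
waits_for = {"P2": "line_B", "P1": "bus"}   # processor -> resource it needs
holds = {"bus": "P2", "line_B": "P1"}       # resource -> processor holding it

def circular_wait():
    # True if every waited-for resource is held by another waiting processor.
    return all(holds.get(r) in waits_for for r in waits_for.values())

print(circular_wait())   # before the fix: True (the deadlock scenario)

# The fix: P1 services P2's incoming BusRdX for line B right away,
# flushing its dirty copy even though its own BusRdX for A is still queued.
del holds["line_B"]      # P1 no longer sits on the dirty copy
del waits_for["P2"]      # P2's request is satisfied; it will release the bus

print(circular_wait())   # after the fix: False -- the circular wait is broken
```

This is essentially breaking the hold-and-wait condition: P1 still waits for the bus, but it no longer withholds a resource someone else needs while doing so.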