Slide 23 of 34
sbly

Couldn't NACKs be a source of livelock? That is, a request keeps retrying, but every time it does so the buffer is full, because other requests have been sent in the meantime? Is there a mechanism for preventing this? Or is this just basically unlikely enough that we ignore it?

mchoquet

What you described sounds like starvation, not livelock. I'm not sure what mechanisms are used to prevent this, but I bet caches retry so frequently that it isn't an issue.

RICEric22

If the other requests filling the buffer are making progress, then that is indeed starvation and not livelock. It is only livelock when no one is making progress at all.

sbly

Oops, you guys are right! It's starvation, not livelock. It still seems like a problem, though.

jinsikl

@sbly Yeah, I was also thinking that the NACK might cause starvation. I think a simple way to solve the problem is to keep track of an int, NEXT, denoting a CPU. We can imagine NEXT starting at 0. Whenever the buffer fills, NEXT gets incremented (wrapping back to 0 if it goes beyond the number of processors). Then once the buffer has room, the CPU denoted by NEXT gets the highest priority for making the next request.

So in the worst case, a processor will be NACKed a number of times equal to the number of processors. And this NEXT variable doesn't need to be centralized either; it can be kept up to date on each processor, just like the table.
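
A rough sketch of that round-robin idea in C, just to make the scheme concrete (the names NUM_PROCS, next_priority, on_buffer_full, and accept_retry are all made up, and a real directory would implement this in hardware, not software):

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_PROCS 4

static int next_priority = 0;   /* processor whose retry is served first */

/* Directory sees its request buffer fill up: bump the priority pointer,
 * wrapping back to 0 past the last processor. */
static void on_buffer_full(void) {
    next_priority = (next_priority + 1) % NUM_PROCS;
}

/* Buffer has room again: only the prioritized processor's retry is
 * accepted; everyone else gets NACKed once more. */
static bool accept_retry(int proc) {
    return proc == next_priority;
}

int main(void) {
    on_buffer_full();   /* pretend the buffer filled once */
    for (int p = 0; p < NUM_PROCS; p++)
        printf("P%d retry accepted? %s\n", p, accept_retry(p) ? "yes" : "no");
    return 0;
}
```

Since the pointer only advances when the buffer fills, every processor eventually becomes the prioritized one, which bounds how many times any single request can be NACKed.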

mamini

I think negative acks can be used as a solution for conflicting requests. This might help increase efficiency: by knowing immediately that a request to (say) x can't be completed right now, the processor that sent the conflicting request can do some useful work instead of remaining idle, and then send the request again later.
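
A minimal sketch of that retry pattern (send_request, do_independent_work, and access_line are hypothetical stand-ins; the real behavior lives in the cache controller, not in software like this):

```c
#include <stdbool.h>
#include <stdio.h>

static int attempts = 0;

/* Stand-in for issuing a coherence request; pretend the directory
 * NACKs the first two attempts because of a conflict. */
static bool send_request(int addr) {
    (void)addr;
    return ++attempts > 2;
}

/* Stand-in for work that does not depend on the requested line. */
static void do_independent_work(void) {
    printf("doing unrelated work while waiting to retry\n");
}

static void access_line(int addr) {
    while (!send_request(addr)) {
        /* NACKed: overlap useful work with the retry instead of idling. */
        do_independent_work();
    }
    /* Request accepted; the access can now complete. */
    printf("request for %#x accepted\n", addr);
}

int main(void) {
    access_line(0x40);
    return 0;
}
```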