Couldn't NACKs be a source of livelock? That is, a request keeps retrying, but every time it does so the buffer is full, because other requests have been sent in the meantime? Is there a mechanism for preventing this? Or is this just basically unlikely enough that we ignore it?
mchoquet
What you described sounds like starvation, not livelock. I'm not sure what mechanisms are used to prevent this, but I bet caches retry so frequently that it isn't an issue.
RICEric22
If the other requests filling the buffer are currently making progress, then that is in fact starvation and not livelock. It is only livelock when no one is making progress.
sbly
Oops you guys are right! It's starvation, not livelock. It still seems like that's a problem though.
jinsikl
@sbly Yeah, I was also thinking that NACKs might cause starvation. I think a simple way to solve the problem is to keep track of an int, NEXT, denoting a cpu. We can imagine NEXT starting at 0. Whenever the buffer fills, NEXT gets incremented (wrapping back to 0 if it goes past the number of processors). Then once the buffer has room again, the cpu denoted by NEXT gets the highest priority to make the next request.
So in the worst case, a processor will receive a number of NACKs equal to the number of processors. And this NEXT variable doesn't need to be centralized either; it can be kept updated on each processor, just like the table.
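The scheme above could be sketched roughly like this. It's a toy model, not a real directory protocol: the class and method names are made up, and the rule that a freed slot stays reserved for NEXT until that cpu claims it is one possible reading of "gets the highest priority".

```python
class NackArbiter:
    """Toy model of a round-robin retry-priority scheme for NACKs."""

    def __init__(self, num_cpus, capacity):
        self.num_cpus = num_cpus
        self.capacity = capacity
        self.buffer = []        # accepted requests awaiting service
        self.next = 0           # cpu with first claim on the next free slot
        self.reserved = False   # is the next free slot held for self.next?

    def request(self, cpu):
        """Return True if the request is buffered, False if NACKed."""
        if len(self.buffer) == self.capacity:
            return False  # buffer full: NACK, cpu must retry
        if self.reserved and cpu != self.next:
            return False  # free slot is reserved for the prioritized cpu
        self.reserved = False
        self.buffer.append(cpu)
        if len(self.buffer) == self.capacity:
            # Buffer just filled: advance NEXT (wrapping around) and
            # reserve the next slot that frees up for that cpu.
            self.next = (self.next + 1) % self.num_cpus
            self.reserved = True
        return True

    def complete(self):
        """Directory finishes servicing the oldest buffered request."""
        self.buffer.pop(0)
```

With this rule, a cpu that keeps retrying is guaranteed to win a slot after at most `num_cpus` fill-ups of the buffer, which is the worst case described above.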
mamini
I think negative acks can be used as a solution for conflicting requests. This might help increase efficiency: by knowing immediately that a request to (say) x can't be completed right now, the processor that sent the conflicting request can do some useful work instead of remaining idle, and then send the request again.