Slide 32 of 66
eknight7

Could someone please explain this slide? For instance, I am confused about why we need a snoop-pending line on the bus. I thought the snoop mechanism is handled by the caches, and the bus acts only as a highway for data transfer. Is there extra logic on the bus to handle snoop requests?

Also, why are the shared, dirty, and snoop-pending lines the "OR" of results from all processors? For the dirty line, I assume it indicates that some processor holds the cache line in a dirty state and the line needs to be written back to memory. Why would the other caches care about this information on the bus if they hold the line in the Invalid state and don't need it (maybe none of them requested it)?

Updog

The three additional lines are basically a way for the caches to communicate the status of a line in their cache to each other. When a bus transaction occurs, every other cache does a snoop lookup: it asserts the shared line if it holds the line at all, asserts the dirty line if it holds the line in the modified state, and drops its own snoop-pending signal once its lookup completes. Because each bus line is the OR of every cache's contribution, the snoop-pending line only reads 0 once all caches have responded. As an example, when a processor takes a read miss, its cache waits until the snoop-pending line is 0 (meaning all other caches have responded). Then it checks the shared line to decide whether to load the line in the shared state or the exclusive state.
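Here is a small, simplified sketch of that flow. The names (`State`, `SnoopContribution`, `Cache`, `bus_read`) and the sequential structure are my own for illustration, not from the slide; real hardware would stall the requester on the snoop-pending line rather than run the lookups one after another.

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    INVALID = 0
    SHARED = 1
    EXCLUSIVE = 2
    MODIFIED = 3

@dataclass
class SnoopContribution:
    shared: bool = False   # asserted if this cache holds the line (S/E/M)
    dirty: bool = False    # asserted if this cache holds the line in M
    pending: bool = True   # held high until this cache finishes its lookup

class Cache:
    def __init__(self):
        self.lines = {}    # address -> State

    def snoop(self, addr) -> SnoopContribution:
        """Snoop lookup performed by every *other* cache on a bus read."""
        state = self.lines.get(addr, State.INVALID)
        c = SnoopContribution()
        c.shared = state != State.INVALID
        c.dirty = state == State.MODIFIED
        c.pending = False  # lookup done: drop our snoop-pending contribution
        # A real cache would also flush a dirty line and downgrade M/E -> S here.
        if state in (State.MODIFIED, State.EXCLUSIVE):
            self.lines[addr] = State.SHARED
        return c

def bus_read(requester: Cache, others: list, addr) -> State:
    """Requester issues a BusRd; the OR of all contributions drives the lines."""
    contributions = [c.snoop(addr) for c in others]
    # Wired-OR of each cache's signals. In this sequential model every snoop
    # has already completed, so snoop-pending is low; in hardware the
    # requester stalls until it drops.
    snoop_pending = any(c.pending for c in contributions)
    assert not snoop_pending
    shared = any(c.shared for c in contributions)
    dirty = any(c.dirty for c in contributions)
    # dirty tells memory that some cache will supply the data; the requester
    # itself only needs shared to pick its new state.
    new_state = State.SHARED if shared else State.EXCLUSIVE
    requester.lines[addr] = new_state
    return new_state

# Example: P0 read-misses on a line that P1 holds in MODIFIED state.
p0, p1, p2 = Cache(), Cache(), Cache()
p1.lines[0x40] = State.MODIFIED
print(bus_read(p0, [p1, p2], 0x40))  # State.SHARED (the shared line was asserted)
```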

Khryl

As shown on the previous slide, the three additional lines summarize the state of the line in every cache and each cache's response to the bus transaction. They not only tell the requesting cache what to do, but also guide the behavior of memory (for example, an asserted dirty line means the owning cache, not memory, supplies the data).
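A tiny sketch of that last point, with a function name of my own choosing: once the snoop results have settled, the memory controller can look at the OR'd dirty line to decide whether to respond.

```python
def memory_should_respond(dirty_line: bool) -> bool:
    """Memory controller's view of the wired-OR dirty line after snoops settle.

    If some cache asserted dirty, that cache owns the only up-to-date copy and
    will flush it onto the bus, so memory suppresses its own response (and can
    pick up the flushed value to update DRAM). Otherwise memory services the read.
    """
    return not dirty_line
```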