Slide 13 of 23
Elias

The slide poses the question: why is there no invalid state?

The answer is straightforward, but fundamental. This protocol is an update protocol, as opposed to being an invalidation protocol. When the machine starts up, the cache is empty (and this presents a transient problem in this diagram - really, there's a fifth "startup" state). As requests come in, eventually the cache is full, and then for each cache line, one of these four states is applicable. At that point (and forever onwards), any line in the cache is valid - there is no such thing as an invalid line, in an update protocol.

tl;dr: The concept of an "invalid state" is fundamentally at odds with the concept of an update protocol. So there isn't one!

kayvonf

@Elias. That was a really solid answer. Great comment.

vrazdan

I think I'm still missing something: I'm confused about where the actual communication about whether a line is being shared comes from. We make the distinction for processor operations between (with sharing) and (no other sharing), but I don't see where that information comes from, or how it gets sent to the relevant cache so that it may enter the proper state.

Elias

Caches communicate with each other. Remember that the cache is built by hardware designers - the processor is not the only thing in a computer with logic built in. The processor actually doesn't come into play here - the caches maintain their own state, and communicate with each other (via the interconnect).

kayvonf

@Vrazdan. Remember, in all of the cache coherence protocols, events that may induce a state transition may be LD/ST requests from the cache's local processor, or messages from other caches. This state diagram (or any of the invalidation-based schemes from the previous lecture) describes the logic performed by each cache in response to any of those events.

For example, in the Dragon protocol above, when a BusRd transaction is placed on the interconnect by another cache, all the caches have to respond with information about whether they also have a valid copy of that line. If one of the caches responds with "hey, I'm sharing this line too", the line is loaded by the requesting cache into the Sc state. If no caches indicate they have a copy of the line, the line is loaded by the requesting cache into the E state.

[Edit: I meant @vrazdan, not @elias]
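To make the snooping response concrete, here is a minimal sketch (not from the lecture; the class and function names are ours) of the read-miss decision: a BusRd goes on the interconnect, every other cache reports whether it holds a valid copy, and the resulting "shared" signal selects the state the requesting cache loads the line into.

```python
class Cache:
    """Toy stand-in for a snooping cache: only tracks which addresses it holds."""
    def __init__(self, lines=None):
        self.lines = set(lines or [])  # addresses with a valid copy

    def has_valid_copy(self, addr):
        return addr in self.lines

def state_after_read_miss(other_caches, addr):
    """State the requesting cache loads the line into after a BusRd."""
    # Each other cache snoops the BusRd and asserts the shared signal
    # if it holds a valid copy of the line.
    shared = any(c.has_valid_copy(addr) for c in other_caches)
    return "Sc" if shared else "E"
```

For example, `state_after_read_miss([Cache({0x40})], 0x40)` yields `"Sc"`, while the same miss against caches holding nothing yields `"E"`.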

BigFish

Can someone help explain when a line in Sm will receive a BusUpd? I think only one cache can be in the Sm state for a given line, and no other cache can be in Sm or M. So how can it receive a BusUpd from others?

kayvonf

@bigFish. Good question.

Remember, this is an update-based protocol, so invalidation is not required.

Consider cache 1 in the Sc state that wants to write (it just got a write request from its processor). Assume cache 2 is in the Sm state. Sm means that the line is shared, memory is NOT up to date, and that this cache is responsible for updating memory if the line is ever evicted from cache 2. Basically what's going to happen is that cache 1 (currently in Sc) is going to move into Sm, and cache 2 (currently in Sm) is going to move to Sc.

So:

  • Cache 1 in Sc issues a BusUpd indicating an intent to move into Sm and to update the line with the value v.
  • Cache 2 in the Sm state sees the request, modifies its line with v, then moves to Sc, then gives a nod back to cache 1 confirming it has seen the request.
  • At this point the write has committed and cache 1 can move to Sm.

Of course, all the logic to move the caches between states must be atomic. (Details of how to ensure this were not in scope for the lecture.)
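The handoff described above can be sketched in a few lines of code. This is an illustrative toy (the class and function names are ours, not the lecture's), and it assumes the whole exchange happens atomically, as noted above:

```python
class Cache:
    """Toy cache: maps addresses to data and to a Dragon state."""
    def __init__(self):
        self.lines = {}   # addr -> data
        self.state = {}   # addr -> "Sc" | "Sm" | "E" | "M"

def write_from_sc(writer, others, addr, value):
    """Writer in Sc issues a BusUpd carrying the new value.

    Sharers apply the update; the old Sm owner (if any) demotes itself
    to Sc. Returns True if the line is still shared, in which case the
    writer ends in Sm; otherwise it would end in M.
    """
    shared = False
    for cache in others:
        if addr in cache.lines:
            cache.lines[addr] = value       # snooped BusUpd: apply new value
            if cache.state[addr] == "Sm":   # old owner hands off writeback duty
                cache.state[addr] = "Sc"
            shared = True
    writer.lines[addr] = value
    writer.state[addr] = "Sm" if shared else "M"
    return shared
```

Running the scenario from the bullets (cache 1 in Sc, cache 2 in Sm) leaves cache 1 in Sm and cache 2 in Sc, with both holding the new value.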

BigFish

Oh, I see. So this means that cache 1 first tries to update cache lines in cache 2. If this is successful, then cache 1 moves from Sc to Sm and cache 2 moves from Sm to Sc. Another question: if other caches contain the cache line, must cache 1 move into Sm before the cache line in those other caches can be updated? (Since currently cache 2 is in the Sm state.)

kayvonf

@bigfish: "Cache 1 first tries to update cache lines in cache 2"...

Hold on here... cache 1 and cache 2 BOTH have copies of the line. Both the Sc and Sm states are "shared states" in the protocol, similar to how S is the shared state in the MSI and MESI invalidation-based protocols we discussed.

The Sm state simply means that this cache is responsible for updating memory when the line is evicted (that's the "m", for "modified", in "Sm"). A cache in Sc is not responsible for writing the line back to memory.

So there's no "IF". Cache 1, in the Sc state, wants to update a line it already has in its cache. It does so using the procedure I listed in bullet points above. As part of that procedure, the protocol says that cache 2 also has to take action to maintain the invariants of the protocol as defined on the previous slide.

BigFish

So the BusUpd here just means cache 1 tries to change its state from Sc to Sm, but not actually a data update.