Slide 28 of 35
TeBoring

The difference between this and Slide 25 is the E state.

When a thread wants to read, it shouts to all the other threads. If no other thread cares about the line, this thread can go directly to E, and it doesn't need to shout again if it wants to write later. If someone does care, this thread goes to S instead, and if it wants to write later, it needs to shout again.
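The two cases above can be sketched as a small decision function. This is a minimal sketch of the standard MESI behavior, not code from the lecture: "shout" corresponds to a BusRd broadcast, and "someone cares" corresponds to another cache asserting that it holds the line.

```python
# Minimal sketch of the read-then-write scenario described above, assuming
# a MESI snooping protocol. Function names are illustrative only.

def read_miss(other_caches_have_line: bool) -> str:
    """On a read miss the cache broadcasts BusRd; the resulting state
    depends on whether any other cache reports holding the line."""
    return "S" if other_caches_have_line else "E"

def write(state: str) -> tuple[str, bool]:
    """Returns (new state, whether a bus broadcast was needed)."""
    if state == "E":
        return "M", False   # silent upgrade: no need to "shout again"
    if state in ("S", "I"):
        return "M", True    # must broadcast BusRdX first
    return "M", False       # already M: write hits locally

# Nobody cared: read goes to E, and the later write is silent.
assert read_miss(False) == "E"
assert write("E") == ("M", False)

# Someone cared: read goes to S, and the later write must shout again.
assert read_miss(True) == "S"
assert write("S") == ("M", True)
```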

I have a question: in a distributed system, sometimes a node doesn't answer. Here, will the current thread always get clear answers from all the other threads? Is that correct?

GG

@TeBoring: the "thread" here is different from what it means in a distributed system. Here a thread is typically a CPU core, and I think it is reasonable to assume that the core won't crash and fail to answer during the transaction. But if you apply the protocol to other systems, I think you have to consider such situations.

kayvonf

@TeBoring: I want to make it absolutely clear that this slide is describing the operation of the processor's cache controller, not threads in a program. The state diagram above describes how the cache responds to operations involving a cache line in order to maintain memory coherence. These operations include: (1) Local processor requests for reading or writing to an address stored in the cache line. (2) Operations by other processors (more precisely, other caches) that involve the cache line. A cache controller learns of operations by other processors when remote processors broadcast this information over the chip interconnect.
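The controller described above reacts to exactly two kinds of input: requests from its local processor (PrRd/PrWr) and transactions snooped off the interconnect from other caches (BusRd/BusRdX). A hedged sketch of that per-line state machine, using textbook signal names rather than any real hardware interface:

```python
# Sketch of a per-line MESI cache controller with the two entry points
# described above: local processor operations and snooped bus operations.
# Names (PrRd, PrWr, BusRd, BusRdX) follow common textbook convention;
# this is an illustrative model, not the lecture's own code.

class MESILine:
    def __init__(self):
        self.state = "I"  # lines start Invalid

    def processor_op(self, op: str, others_share: bool = False):
        """Handle PrRd/PrWr from the local core. Returns the bus
        transaction this cache must broadcast, or None if the access
        is satisfied without bus traffic."""
        if op == "PrRd":
            if self.state == "I":
                self.state = "S" if others_share else "E"
                return "BusRd"
            return None              # hit in E/S/M: no bus traffic
        if op == "PrWr":
            if self.state == "M":
                return None          # already exclusive and dirty
            if self.state == "E":
                self.state = "M"     # silent E -> M upgrade
                return None
            self.state = "M"         # from S or I: invalidate other copies
            return "BusRdX"
        raise ValueError(op)

    def bus_op(self, op: str) -> bool:
        """Handle a transaction snooped from another cache. Returns True
        if this cache must flush its dirty copy."""
        must_flush = self.state == "M"
        if op == "BusRd":
            if self.state in ("M", "E"):
                self.state = "S"     # another reader: give up exclusivity
        elif op == "BusRdX":
            self.state = "I"         # another writer: invalidate
        return must_flush
```

For example, a read miss with no sharers lands the line in E; a later local write then upgrades to M silently, while a snooped BusRd forces a flush and a downgrade to S.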

monster

Here the benefit is that when a cache line is in the E state, the cache does not need to send BusRdX on the bus and can directly execute PrWr.

yingchal

@monster, the other benefit is that E decouples exclusivity from ownership of the line: if the line is not dirty, the copy in memory is still valid, so the line can be held exclusively without the cache being responsible for writing it back.