Assume there are two processors, P0 and P1. P0 has modified mem[X], so its line is in state M. P1 now wants to write to mem[X], so it sends a BusRdX over the bus. After this, I think some communication protocol must take place between the two cache controllers, because P1's cache controller has to wait for P0's cache controller to notify it that mem[X] has been flushed to memory before it is safe to read mem[X]. Am I correct? If so, how is this protocol implemented in hardware?
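A minimal sketch of the snoop response the question describes, in Python: when the cache holding the line in M observes a BusRdX for that line, it flushes the dirty data and invalidates its copy before the requester proceeds. The function and signal names here are illustrative, not real hardware interfaces, and many real protocols forward the data cache-to-cache rather than going through memory.

```python
# Hypothetical sketch: P0's controller snoops P1's BusRdX while in M.
def snoop_bus_rdx(owner_state, memory, dirty_value, addr):
    if owner_state == "M":
        memory[addr] = dirty_value   # flush the dirty line to memory first
        # (real designs often supply the data cache-to-cache instead)
    return "I"                       # owner invalidates its copy of the line

mem = {0x40: 1}                      # stale value in memory
new_state = snoop_bus_rdx("M", mem, 99, 0x40)
print(new_state, mem[0x40])          # I 99
```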
bochet
Since I can go directly to M with PrWr, I assume it's write-allocate, right?
kayvonf
Yes, this protocol assumes a write-back cache (and write-allocate, as it would make little sense to design a write-back cache that was not write-allocate).
unparalleled
The state diagram is per cache line. That means that each entry in the cache (assume direct mapped) can be in a different state.
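To make "per cache line" concrete, here is a small Python sketch of a direct-mapped cache where every line carries its own independent coherence state. The class and field names are illustrative assumptions, not part of any real simulator.

```python
from enum import Enum

class State(Enum):
    M = "Modified"
    S = "Shared"
    I = "Invalid"

class Cache:
    """Direct-mapped cache: one independent (tag, state) pair per line."""
    def __init__(self, num_lines=4, line_size=64):
        self.num_lines = num_lines
        self.line_size = line_size
        self.lines = [{"tag": None, "state": State.I}
                      for _ in range(num_lines)]

    def line_for(self, addr):
        # Direct-mapped index: (block number) mod (number of lines).
        return self.lines[(addr // self.line_size) % self.num_lines]

cache = Cache()
cache.line_for(0)["state"] = State.M    # line 0 holds a modified block
cache.line_for(64)["state"] = State.S   # line 1 holds a shared block
# Line 2 (address 128) was never touched, so it is still Invalid.
print(cache.line_for(0)["state"], cache.line_for(128)["state"])
```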
unparalleled
Also, is there a possibility of a race condition here arising from the communication protocol? Say the caches of two processors P1 and P2, each holding value X, want to update X simultaneously. They both broadcast BusRdX as a result of PrWr. Assuming both were in the Shared state with respect to X, what would eventually happen?
crabcake
I think a race can happen due to unsynchronized access to the critical data. In the situation described above, if both processors broadcast simultaneously, one of them will win and update X first. There is no guarantee about which one will win, though.
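The "one of them will win" outcome falls out of the bus being a serialization point: the arbiter grants one BusRdX first, and the loser snoops it and invalidates. A hedged Python sketch of that arbitration, with `random.shuffle` standing in for the unspecified grant order (real arbiters are deterministic hardware, not random):

```python
import random

def arbitrate(requesters):
    """The bus arbiter picks some grant order; the protocol makes no
    promise about which simultaneous requester is granted first."""
    order = list(requesters)
    random.shuffle(order)   # stand-in for an unspecified grant policy
    return order

states = {"P1": "S", "P2": "S"}        # both hold X in Shared
winner, loser = arbitrate(["P1", "P2"])
states[winner] = "M"   # granted BusRdX: upgrade Shared -> Modified
states[loser] = "I"    # snoops the winner's BusRdX: invalidate its copy
# The loser must now re-issue BusRdX to perform its own write.
print(states)
```

Either grant order leaves exactly one cache in M and the other in I, which is why the race is benign for coherence: the writes are serialized, just in an unspecified order.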
whitelez
I am a little curious: will the processors broadcast their occupancy through the switch network, using something like a listen-before-talk mechanism?
nishadg
When you are using shared communication (for broadcasting BusRd and BusRdX), how do you guarantee the execution order, and how do you know that the other processors have received your message and updated their caches accordingly?
life
This slide reminds me of the TCP state transition diagram, where the state of a particular TCP connection is maintained independently by the two parties.