Memory and cache can both provide the correct data, since a processor holding that cache line in the E or S state has not modified it. But which one is faster?
williamx
I think sourcing the data from another cache is usually faster, because many applications are bound by memory bandwidth and latency.
atadkase
Consider the following situation:
1. Processor A has a copy of X in the shared state.
2. Processor B modifies X. This forces A to change X to invalid.
3. Processor C issues an intent to read X.
In this case, is it possible for processor A's cache to simply snoop the bus and update its own copy of the variable (moving it back to the Shared state) while the flush sequence is going on, as a possible optimization?
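The three steps above can be sketched with a toy MESI model (all class and method names here are illustrative; this is a simplified simulation of the protocol's state transitions, not of any real hardware):

```python
# Toy MESI simulation of the scenario: A reads X (Shared), B writes X
# (invalidating A), then C reads X (forcing B to flush).
MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class Cache:
    def __init__(self, name):
        self.name = name
        self.state = INVALID
        self.value = None

    def read(self, bus):
        # A read miss broadcasts a read request on the bus.
        if self.state == INVALID:
            self.value = bus.read_request(self)
            self.state = SHARED
        return self.value

    def write(self, bus, value):
        # A write broadcasts an invalidate; other copies are dropped.
        bus.invalidate(self)
        self.value = value
        self.state = MODIFIED

class Bus:
    def __init__(self, memory, caches):
        self.memory = memory
        self.caches = caches

    def invalidate(self, writer):
        for c in self.caches:
            if c is not writer:
                c.state = INVALID

    def read_request(self, requester):
        # If some cache holds the line Modified, it flushes to memory
        # and drops to Shared. A snooping cache like A could, in the
        # proposed optimization, also pick up the flushed value here.
        for c in self.caches:
            if c is not requester and c.state == MODIFIED:
                self.memory["X"] = c.value
                c.state = SHARED
        return self.memory["X"]

memory = {"X": 0}
A, B, C = Cache("A"), Cache("B"), Cache("C")
bus = Bus(memory, [A, B, C])

A.read(bus)       # 1. A holds X in Shared state
B.write(bus, 42)  # 2. B modifies X; A's copy becomes Invalid
C.read(bus)       # 3. C's read forces B to flush; B and C end Shared
print(A.state, B.state, C.state, C.value)  # I S S 42
```

In this baseline model A stays Invalid after step 3; the optimization being asked about would have A's snoop logic also capture the value during B's flush and transition back to Shared.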