Invalidation-based protocols: When writing, the cache obtains exclusive access to the line, and all other caches drop (invalidate) their copies.
Update-based protocols: When writing, immediately update all other caches that have that same value.
Pros: This will reduce cache misses since fewer lines are dropped.
Cons: High bandwidth requirement -- many updates are sent even if the other caches rarely use that value.
sasthana
How are updates actually propagated? How does the core writing a new value know which other caches need to be updated? Is this also snooping-based? Is the new value to be written also snooped? If it is snooping-based, then I don't really understand why there is a high bandwidth requirement.
haibinl
Yes, as @thomasts said, update-based protocols require higher bandwidth, since data is moved around every time there's an update. Imagine a processor with 8 cores, where 1 core holds variable X in the Modified state, while the other 7 cores used X once and still have the cache line resident even though they no longer use it. The update protocol broadcasts the value of X to all the other cores every time the core in the Modified state changes it. This creates a lot of unnecessary traffic between the cores, because the 7 other cores will never use X again and could simply have dropped the cache line instead.
arcticx
False sharing: when two variables (x and y) reside in the same cache line, and x is only read by one core while y is modified by another core, the whole cache line is invalidated (even though x never changes).
cyl
False sharing can be mitigated by choosing the right data layout, for example using padding so that each thread avoids accessing the same cache line.