Slide 7 of 56

When a processor wants to write to memory, it first checks whether the address is in the cache. If it isn't, it loads the line into the cache (possibly evicting another line). It then changes the value in the cache and sets the dirty bit. Only when another line needs to evict the current one is the value in the cache written back to memory. In the example above, each core changes the value at an address in its local cache, even though the address belongs to the global address space.
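That write-allocate, write-back behavior can be sketched as a tiny one-line cache with a dirty bit. This is an illustrative model, not code from the lecture; the class and method names are made up:

```python
class WriteBackCache:
    """One-line write-back cache with a dirty bit (illustrative sketch)."""

    def __init__(self, memory):
        self.memory = memory   # backing store: dict of addr -> value
        self.line = None       # (addr, value, dirty) or None

    def _fetch(self, addr):
        if self.line is None or self.line[0] != addr:
            self._evict()                        # make room for the new line
            self.line = (addr, self.memory[addr], False)

    def _evict(self):
        if self.line is not None and self.line[2]:
            a, v, _ = self.line
            self.memory[a] = v                   # write back only if dirty
        self.line = None

    def load(self, addr):
        self._fetch(addr)
        return self.line[1]

    def store(self, addr, value):
        self._fetch(addr)
        self.line = (addr, value, True)          # set the dirty bit

memory = {'X': 0, 'Y': 0}
cache = WriteBackCache(memory)
cache.store('X', 42)
print(memory['X'])     # 0: the new value lives only in the cache
cache.load('Y')        # loading Y evicts X, writing the dirty value back
print(memory['X'])     # 42
```

Note that main memory only learns about the store when the dirty line is evicted, which is exactly what breaks on this slide once several cores each have their own cache.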


In the chart, when each processor wants to load X, it only checks its local cache. If there's a miss, they will initialize X to zero and do operations based on that. These values of X are never written back to main memory until the cache is forced to flush them (in the case above, by the load of Y). Some coordination across the different local caches is needed.
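The chart's lost-update scenario can be reproduced with two private one-line caches over one shared memory. Again a hypothetical sketch (the helper names `load`, `store`, and `flush` are my own, and per the correction below, a miss reads the value from memory rather than zero-initializing it):

```python
# Two cores, each with a private one-line write-back cache, sharing memory.
memory = {'X': 0, 'Y': 0}

def flush(cache):
    for a, (v, dirty) in list(cache.items()):
        if dirty:
            memory[a] = v            # write back dirty lines on eviction
        del cache[a]

def load(cache, addr):
    if addr not in cache:            # miss: read from memory (not zero-init)
        flush(cache)                 # one-line cache: evict whatever is held
        cache[addr] = [memory[addr], False]
    return cache[addr][0]

def store(cache, addr, value):
    load(cache, addr)
    cache[addr] = [value, True]      # set the dirty bit

c0, c1 = {}, {}
store(c0, 'X', load(c0, 'X') + 1)   # core 0 sees X = 1 in its own cache
store(c1, 'X', load(c1, 'X') + 1)   # core 1 still reads X = 0 from memory
print(memory['X'])                   # 0: no write-back has happened yet
load(c0, 'Y')                        # loading Y evicts X, flushing X = 1
print(memory['X'])                   # 1: core 1's increment is lost
```

Both cores incremented X, yet memory ends up at 1, not 2, and core 1's cache still holds its own stale dirty copy. This is the coordination problem a cache coherence protocol solves.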


@qqkk "If there's a miss, they will initialize X to zero and do operations based on that" is incorrect. The value of X is not initialized to 0; it is read from memory (which happens to contain 0).


@articx is correct. On a cache miss the contents of the required memory address will be loaded from memory.


Am I correct in understanding that this slide aims to illustrate that when multiple processors share a main memory, our intuitive notion of shared memory breaks down, because a location in memory no longer holds a single value?

And that this is a result of duplication into processor-local caches, which falsely promise to represent the same data as their corresponding locations in main memory?