PandaX

We have to provide this abstraction to processors because they don't know about the existence of caches.

totofufu

@PandaX doesn't that depend on what kind of system we're talking about? I thought cache controllers are discussed later in the lecture; they let each processor essentially keep track of the stores/loads the other processors perform, which also determines what items in memory get cached.
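
As a rough illustration of that tracking (a simplified three-state MSI sketch, not necessarily the protocol from the lecture; the function and state names are made up), a snooping controller's per-line state machine could look something like this:

```c
/* Hedged sketch: one cache line's M/S/I state, updated both by this
 * processor's own accesses and by transactions it snoops from others. */
#include <stdio.h>

typedef enum { INVALID, SHARED, MODIFIED } LineState;

/* Transition when THIS cache observes a bus transaction issued by
 * ANOTHER processor for the same line. */
LineState snoop(LineState s, int other_is_write) {
    if (other_is_write) return INVALID;   /* remote write: drop our copy            */
    if (s == MODIFIED)  return SHARED;    /* remote read: supply data, downgrade    */
    return s;                             /* remote read otherwise changes nothing  */
}

/* Transition when THIS processor issues its own load or store. */
LineState access(LineState s, int is_write) {
    if (is_write) return MODIFIED;          /* gain exclusive ownership, then write */
    return (s == INVALID) ? SHARED : s;     /* a load brings the line in as shared  */
}

int main(void) {
    LineState s = INVALID;
    s = access(s, 0);   /* our load:     I -> S */
    s = access(s, 1);   /* our store:    S -> M */
    s = snoop(s, 0);    /* remote load:  M -> S */
    s = snoop(s, 1);    /* remote store: S -> I */
    printf("final state: %d\n", s);   /* 0 == INVALID */
    return 0;
}
```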

PID_1

@PandaX totofufu is correct. The processor knows about the caches because the caches are physically a part of it. They are part of the processor's own implementation, and are not an abstraction from its perspective.

However, from the perspective of the operating system or a program, we usually ignore the existence of any caches and let the hardware take care of them for us. This way, compiler logic can assume the program's threads will be (arbitrarily) interleaved in some serial fashion, OS logic can assume it is allowed to actually run threads in parallel, and everything still behaves as expected because the processor uses coherence mechanisms to maintain the abstraction.
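
To make that concrete, here is a minimal sketch (C11 atomics plus pthreads; the variable and function names are made up) of a program that never mentions caches at all, yet counts on hardware coherence to make one thread's store visible to the other:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;            /* plain shared data         */
static atomic_int ready = 0;   /* flag the writer publishes */

static void *writer(void *arg) {
    (void)arg;
    payload = 42;                                  /* may land in the writer's cache first */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return NULL;
}

static void *reader(void *arg) {
    (void)arg;
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                                          /* spin; coherence delivers the update  */
    printf("payload = %d\n", payload);             /* prints 42 with no explicit flushes   */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, writer, NULL);
    pthread_create(&b, NULL, reader, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

Note that the source contains no cache-flush or invalidate instructions; the atomics are only there for language-level ordering. Whether the data actually moves between caches is entirely the hardware's problem, which is exactly the abstraction being described.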

ZhuansunXt

Just wondering whether we will talk about the cache coherence problem in a message-passing system rather than the shared-memory parallel model. Imagine processors located on different machines that communicate via MPI or something like that. How is cache coherence maintained then?

vincom2

@ZhuansunXt: the first bullet point on this slide might be relevant?
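
For what it's worth, in the message-passing setup @ZhuansunXt describes there is no shared address space between machines, so there is nothing for hardware coherence to do across them (each node's own caches are still kept coherent locally as usual). Data becomes visible to another process only when it is explicitly sent and received. A minimal MPI sketch (the rank numbers, tag, and value are arbitrary):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, value = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                          /* local write: invisible to rank 1      */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* visibility happens only via a message */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

(Run with at least two ranks, e.g. `mpirun -np 2`.)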