Slide 9 of 49
cwchang

The directory stores information about which processors have a given memory line cached. Every line of memory has a directory entry. In the previous lecture's implementation, every time a processor wanted to access data, it had to shout "I'm going to read/write this block" to all other processors. In this lecture's implementation, a processor that needs to do a memory operation doesn't need to yell; it just goes to the line's home directory to get the necessary information.

BigPapaChu

I'm confused about how a directory would be any better than the typical "shout that this block is being read" method. In the end we still have to look through the entire directory (especially if it's sparse) to determine which processors are caching the line. Does the overhead of shouting to everyone really take that much more time?

firebb

@BigPapaChu I'm not sure what you mean by "look through the entire directory to determine (especially if it's sparse) which ones are being used". If you mean looking at the bit array and figuring out which cores have this cache line, that can be done quickly by the hardware. In my opinion, directory-based cache coherence involves less communication than broadcasting. With broadcasting, a processor may also need to wait for responses from everyone, so as the system scales this becomes a problem.

Master

Just a note here:

The bits that indicate the status (exclusive, shared, etc.) of a cache line are in the cache itself.

The directory has an entry for every line of memory.

sushi

It seems that because the directory stores status for every line of memory, it might be very big. So where will the directory be stored, on disk or in cache? If it's on disk, reading it will incur a lot of latency; and if it's in cache, that's expensive.

Penguin

In later slides (25+) we go into how we can reduce the storage overhead of the directory. Directories would be very large if they were implemented in the way described in the slide.