Slide 10 of 66
anonymous

I don't quite understand how this handles the LRU eviction case. Suppose L1 gets lots of cache hits on some variable "a", and then L2 needs to evict a line, and L2's LRU line is "b". "b" will be evicted from L2, but how does this extra added bit help remove "b" from L1 as well, instead of "a"?

krillfish

The added bit gives the hardware the information it needs to maintain inclusion. If the bit is set when a line is evicted from the L2 cache, the hardware notifies the L1 cache to remove that line as well. It's not an automatic thing; the bit is what makes the notification possible.

ZhuansunXt

I have the same question: how do these added bits fix the LRU problem, which can cause a violation of inclusion? In the last example, "b" was evicted from L2 but "a" was evicted from L1. How do the added bits prevent that from happening?

krillfish

The main problem before was that L2 and L1 each evicted based on their own LRU policy, independently. The added bits allow more communication between the two caches.

I'm pretty sure that's the case. In the no-bits case, L1 and L2 have no idea what's going on in the other cache. If L1 evicts a line, it communicates that to L2 so the bit is cleared and any modified data is flushed. If L2 needs to evict a line whose "in L1" bit is set, it tells L1 to remove it as well. Basically, this added information gives the hardware implementation enough to figure out what's going on and what needs to be updated.
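A toy sketch of that protocol in Python (the class name, cache sizes, and access sequence here are made up for illustration, not from the lecture). L1 hits don't update L2's recency, so L2's LRU order can diverge from L1's, which reproduces the exact scenario in the question: "a" is hot in L1 but is L2's LRU line. The per-line "in L1" bit is what lets L2 back-invalidate L1 when it evicts, and L1's notify-on-evict is what keeps the bit accurate.

```python
from collections import OrderedDict

class InclusiveCacheSim:
    """Toy two-level LRU model: L2 keeps an 'in L1' bit per line,
    so inclusion (L1 contents are a subset of L2 contents) is preserved."""

    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()   # addr -> True, in LRU order (oldest first)
        self.l2 = OrderedDict()   # addr -> 'in L1' bit, in LRU order
        self.l1_size, self.l2_size = l1_size, l2_size

    def access(self, addr):
        if addr in self.l1:
            # L1 hit: L2 never sees the access, so L2's LRU order
            # can drift out of sync with L1's -- the problem case.
            self.l1.move_to_end(addr)
            return
        # L1 miss: look up / fill L2 first (inclusion: L1 lines are in L2).
        if addr in self.l2:
            self.l2.move_to_end(addr)
        else:
            if len(self.l2) >= self.l2_size:
                victim, in_l1 = self.l2.popitem(last=False)  # L2's LRU line
                if in_l1:
                    # The extra bit at work: L2 tells L1 to drop it too.
                    self.l1.pop(victim, None)
            self.l2[addr] = False
        # Fill L1, evicting L1's own LRU line if needed.
        if len(self.l1) >= self.l1_size:
            v, _ = self.l1.popitem(last=False)
            if v in self.l2:
                self.l2[v] = False   # L1 notifies L2: clear the bit
        self.l1[addr] = True
        self.l2[addr] = True         # set the 'in L1' bit

sim = InclusiveCacheSim(l1_size=2, l2_size=2)
for a in ["a", "b", "a", "a", "a", "c"]:   # 'a' stays hot in L1 only
    sim.access(a)
# L2's LRU victim for 'c' is 'a' (hot in L1!); the set bit forces a
# back-invalidation, so 'a' leaves both caches and inclusion still holds.
assert "a" not in sim.l1 and "a" not in sim.l2
assert set(sim.l1) <= set(sim.l2)
```

Without the back-invalidation in the L2-eviction branch, the final state would be "a" in L1 but not in L2, which is exactly the inclusion violation from the slide.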

ZhuansunXt

@krillfish Thanks much! That explains a lot.