Question: Why is a dirty bit necessary in a write-back cache? Is a dirty-bit necessary in a write-through cache?
tpassaro
A dirty bit is necessary when the data in a cache line can differ from main memory's copy. Since a write-through cache updates main memory on every write, memory never becomes stale, so a dirty bit is not necessary: there is never a pending write that must reach memory before the line is reused.
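To make the contrast concrete, here's a minimal sketch of a single cache line under each policy. The class and attribute names are illustrative, not from any real hardware spec:

```python
# Toy single-line cache models contrasting write-back and write-through.
# Names here (WriteBackLine, WriteThroughLine) are hypothetical.

class WriteBackLine:
    def __init__(self, memory, addr):
        self.memory = memory
        self.addr = addr
        self.data = memory[addr]
        self.dirty = False              # dirty bit: line differs from memory

    def write(self, value):
        self.data = value               # update the cached copy only
        self.dirty = True               # memory is now stale

    def evict(self):
        if self.dirty:                  # write back only if the line changed
            self.memory[self.addr] = self.data


class WriteThroughLine:
    def __init__(self, memory, addr):
        self.memory = memory
        self.addr = addr
        self.data = memory[addr]
        # no dirty bit: memory is always kept up to date

    def write(self, value):
        self.data = value
        self.memory[self.addr] = value  # every write also updates memory

    def evict(self):
        pass                            # nothing to write back
```

With `WriteBackLine`, memory only sees the new value after eviction; with `WriteThroughLine`, memory sees it immediately, which is why the dirty bit serves no purpose there.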
shudam
When a cache line is evicted, if it is dirty, write data back to memory and mark as "not dirty".
TeBoring
Will there be contention for a cache line?
I mean before step 3, two threads keep on evicting the current cache line.
martin
For two threads on the same processor, I don't think this would be a problem, because they would access the same local cache. But for threads on different processors, this is where the problem of cache coherence might arise.
kayvonf
@shudam: To clarify, you are correct that eviction of a dirty line cannot proceed until the line is written back to memory, but there's no need to mark the line as "not dirty" if it's being evicted: the line is no longer in the cache!
kayvonf
@TeBoring: It's certainly the case that the execution of two threads on a processor (via any form of multi-threading: hardware multi-threading or software-managed threading) can cause one thread's working set to knock the other thread's working set out of the cache. (This is called "thrashing", and something programmers may want to be cognizant of when optimizing parallel programs).
To prevent the "livelock" problem I believe you are thinking about, any reasonable implementation will allow the write to complete prior to evicting the line in the future. What I mean is, if the cache line is brought into the cache due to a write, the update of the line will be allowed to occur (completing the write operation) before any other operation can proceed to kick the line out.
I think an example that's closer in spirit to what you are envisioning is contention for the same line between two writing processors. As we're about to talk about later in this lecture, to ensure memory coherence, the writes cannot be performed simultaneously. A write by processor 1 will cause processor 2 to invalidate its copy of the line. Conversely, a subsequent write by processor 2 will cause processor 1 to invalidate its copy of the line. If the two processors continue to perform writes, the line will "bounce" back and forth between the two processors' caches. All this is discussed in detail beginning on slide 17.
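The "bouncing" can be sketched with a toy model of two caches under an invalidation protocol. This is a simplification (real protocols such as MSI/MESI track more states), and the names are illustrative:

```python
# Toy model of two caches contending for one line under an
# invalidation-based protocol. A write must first invalidate the
# other cache's copy, so alternating writers "ping-pong" the line.

class Cache:
    def __init__(self, name):
        self.name = name
        self.has_line = False

    def write(self, peer, stats):
        if peer.has_line:               # other cache holds the line:
            peer.has_line = False       # invalidate its copy first
            stats["invalidations"] += 1
        self.has_line = True            # this cache now owns the line

stats = {"invalidations": 0}
p1, p2 = Cache("P1"), Cache("P2")
for _ in range(10):                     # two processors alternate writes
    p1.write(p2, stats)
    p2.write(p1, stats)

# 20 writes; every write after the first one invalidates the
# other cache's copy, so the line bounces 19 times.
print(stats["invalidations"])           # → 19
```

Each invalidation forces the next writer to re-acquire the line, which is the communication cost behind the "bouncing" described above.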