Josephus

If we're writing to memory, why do we need to load the cache line from memory? Because the line may contain bytes that aren't being written: the whole line has to be fetched so that, when the dirty line is later written back, those untouched bytes don't overwrite memory with garbage.
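For example (a sketch; the 64-byte line size and the struct layout are just assumptions for illustration):

    #include <stdint.h>

    // Assume 64-byte cache lines (typical, but an assumption here).
    struct record {
        int64_t a;        // bytes 0-7 of the line
        int64_t b;        // bytes 8-15
        char    pad[48];  // remaining bytes of the 64-byte line
    };

    void update(struct record *r) {
        // This store touches only 8 of the 64 bytes in the line.
        // Under write-allocate / write-back, the miss first pulls the
        // whole line into the cache; otherwise, when the dirty line is
        // later written back, the other 56 bytes would overwrite memory
        // with stale or undefined data.
        r->a = 42;
    }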

zvonryan

This is a locality concern: after a write, we might expect more reads or writes to the same address (temporal locality) or to nearby addresses on the same line (spatial locality), and having the line in the cache turns those accesses into hits.

418_touhenying

@Josephus In a machine that supports Direct Memory Access (DMA), this might not be the case.

thomasts

Note that with a write-through, no-allocate cache policy, the processor can write straight to memory without fetching the line into the cache. But then we don't get the benefits of spatial locality, as zvonryan points out.

sasthana

Is there a policy like write-through but where the line is still allocated in the cache? That should give the benefits of locality, since the line is allocated, and we still wouldn't need a dirty bit because write-through keeps memory up to date.

1pct

The two common options on a write miss are write-allocate and no-write-allocate. The two options on a write hit are write-back and write-through. Write-back caches usually use write-allocate, and write-through caches usually use no-write-allocate.
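To make the pairings concrete, here is a toy single-line "cache" sketch (my own illustration with made-up names, not code from the lecture) contrasting the two usual combinations on a write:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define LINE_SIZE 64
    #define MEM_SIZE  4096

    static uint8_t memory[MEM_SIZE];   // toy "main memory"; addr assumed < MEM_SIZE

    struct line {
        bool     valid;
        bool     dirty;                // only needed for write-back
        uint64_t tag;
        uint8_t  data[LINE_SIZE];
    };

    // Write-back + write-allocate: a write miss fetches the whole line
    // (the load Josephus asks about); this and later accesses to the same
    // line then hit in the cache, and memory is updated only when the
    // dirty line is evicted.
    static void write_wb_alloc(struct line *l, uint64_t addr, uint8_t byte) {
        uint64_t tag = addr / LINE_SIZE;
        if (!l->valid || l->tag != tag) {                          // write miss
            if (l->valid && l->dirty)                              // evict dirty line
                memcpy(&memory[l->tag * LINE_SIZE], l->data, LINE_SIZE);
            memcpy(l->data, &memory[tag * LINE_SIZE], LINE_SIZE);  // line fill
            l->valid = true; l->tag = tag;
        }
        l->data[addr % LINE_SIZE] = byte;
        l->dirty = true;
    }

    // Write-through + no-write-allocate: a write miss bypasses the cache
    // entirely; a write hit updates both cache and memory. No dirty bit
    // is needed because memory is always up to date.
    static void write_wt_noalloc(struct line *l, uint64_t addr, uint8_t byte) {
        if (l->valid && l->tag == addr / LINE_SIZE)   // write hit
            l->data[addr % LINE_SIZE] = byte;
        memory[addr] = byte;                          // every write reaches memory
    }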

makingthingsfast

I do remember discussion about how write-through no-allocate bypasses the cache, which is only useful if we are constantly touching different values. Keeping the line in the cache is what lets us take advantage of temporal locality.

doodooloo

If I remember correctly from 213, write-allocate/write-no-allocate are policies concerning write misses, but write-back/write-through are actually policies concerning write hits (though the title of the slide does not reflect that).

enuitt

@Josephus This has to do with spatial locality, I believe. But the professor mentioned that there are some ways you can tell the processor that you will only be writing to a specific address, so there is no need to load the entire line into the cache.
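I think this is referring to non-temporal ("streaming") stores, though that's my guess at what was meant. A minimal x86 SSE2 sketch (the function and variable names are mine):

    #include <emmintrin.h>   // SSE2 intrinsics: _mm_set1_epi32, _mm_stream_si128
    #include <stddef.h>
    #include <stdint.h>

    // Fill a buffer using non-temporal stores. The hint tells the hardware
    // the data won't be reused soon, so the stores can be write-combined
    // straight to memory instead of fetching each line into the cache first.
    // Assumes dst is 16-byte aligned and n is a multiple of 4 ints.
    void fill_streaming(int32_t *dst, size_t n, int32_t value) {
        __m128i v = _mm_set1_epi32(value);
        for (size_t i = 0; i + 4 <= n; i += 4)
            _mm_stream_si128((__m128i *)(dst + i), v);
        _mm_sfence();   // order the streamed stores before later accesses
    }

This only pays off for large, write-only buffers that won't be read again soon; for data you'll reuse, the normal write-allocate path is what you want.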