If we're writing to memory, why do we need to load the cache line from memory first? Because the line contains bytes that aren't being written; the entire line must be fetched so that those other bytes aren't erroneously lost when the line is eventually written back.
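A tiny sketch of this (hypothetical 8-byte lines and a dict-as-cache in Python, not any real hardware) showing why the fetch is needed: a 2-byte store is merged into a freshly fetched line, so the other six bytes of the line keep their correct memory values.

```python
LINE_SIZE = 8  # hypothetical 8-byte cache lines

memory = bytearray(range(32))  # 4 lines of "main memory"
cache = {}                     # line base address -> line contents

def write_allocate_store(addr, data):
    """On a write miss, fetch the whole line, then merge the store into it."""
    base = addr - (addr % LINE_SIZE)
    if base not in cache:
        # Fetch the full line so the bytes we aren't writing stay correct.
        cache[base] = bytearray(memory[base:base + LINE_SIZE])
    line = cache[base]
    line[addr - base:addr - base + len(data)] = data

write_allocate_store(10, b"\xff\xff")  # 2-byte store into the line at base 8
# Offsets 0-1 and 4-7 of the line still hold their original memory values;
# only offsets 2-3 were overwritten.
print(list(cache[8]))
```

Skipping the fetch and writing only the two stored bytes would leave the rest of the line as garbage, which would then clobber memory on write-back.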
This is really spatial locality: we might expect reads or writes to addresses near one where a write just happened, so fetching the whole line can pay off.
@Josephus In a machine that supports Direct Memory Access, this might not be the case.
Note that with a write-through no-allocate policy, the processor can write straight to memory without fetching the line into the cache. But then we don't get the benefits of spatial locality, as zvonryan points out.
Is there a policy like write-through but with write-allocate? That should still provide the benefits of temporal locality, since the line is allocated, but we wouldn't need a dirty bit.
Two common options on a write miss are write-allocate and no-write-allocate. Two options on a write hit are write-back and write-through. Write-back caches usually use write-allocate, and write-through caches usually use no-write-allocate.
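A minimal sketch of why those pairings make sense (a hypothetical one-line "cache" in Python, counting memory traffic for a stream of stores to the same line):

```python
def run(stores, policy):
    """Count memory reads/writes for a store stream under a write policy.

    policy: 'wb+alloc'   = write-back + write-allocate
            'wt+noalloc' = write-through + no-write-allocate
    The cache holds a single line, tracked by its address (hypothetical).
    """
    cached, dirty = None, False
    mem_reads = mem_writes = 0
    for addr in stores:
        if policy == "wb+alloc":
            if cached != addr:       # write miss: allocate the line
                if dirty:
                    mem_writes += 1  # write back the evicted dirty line
                mem_reads += 1       # fetch the line from memory
                cached, dirty = addr, False
            dirty = True             # write happens in the cache only
        else:  # wt+noalloc
            if cached == addr:
                mem_writes += 1      # hit: update cache AND memory
            else:
                mem_writes += 1      # miss: write goes straight to memory
    return mem_reads, mem_writes

# Ten stores to the same line: write-back + allocate touches memory once
# (the fetch), while write-through + no-allocate goes to memory every time.
print(run([0x40] * 10, "wb+alloc"))    # (1, 0) until the line is evicted
print(run([0x40] * 10, "wt+noalloc"))  # (0, 10)
```

With repeated stores to one line, write-back + allocate pays a single fetch and then absorbs every store; write-through + no-allocate avoids the fetch entirely but sends every store to memory, which is why each policy pairs with the matching miss behavior.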
I do remember discussion about how write-through no-allocate bypasses the cache, which is only useful if we're constantly touching different addresses. Keeping the line in the cache lets us take advantage of temporal locality.
If I remember correctly from 213, write-allocate/write-no-allocate are policies concerning write misses, but write-back/write-through are actually policies concerning write hits (though the title of the slide does not reflect that).
@Josephus This has to do with spatial locality, I believe. But the professor mentioned that there are some ways you can tell the processor that you will only be writing to a specific address, so there is no need to load the entire line into the cache.