caryy

Question: Why might most modern processors have a write-allocate, write-back cache instead of a write-no-allocate, write-through cache?

top

@caryy It seems that a processor implementing write-allocate, write-back is trying to avoid memory accesses as much as possible, presumably because the overhead of going all the way to memory is higher than the overhead of updating the different levels of the cache hierarchy.
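A rough way to see the difference is to count the memory traffic for repeated stores to the same location under each policy. The counts below are a toy model of the two policies, not measurements from any real machine:

    #include <stdio.h>

    /* Toy model: N stores to the same word, one cache line involved.
     * The counts are illustrative assumptions about the two policies. */
    int main(void) {
        const long N = 1000;

        /* write-no-allocate + write-through: every store is sent to memory,
         * and a write miss never brings the line into the cache. */
        long wt_mem_writes = N;

        /* write-allocate + write-back: the first store misses and allocates
         * the line (one memory read); later stores hit in the cache, and the
         * dirty line is written back to memory once, on eviction. */
        long wb_mem_reads  = 1;
        long wb_mem_writes = 1;

        printf("write-through, no-allocate: %ld memory writes\n", wt_mem_writes);
        printf("write-back, write-allocate: %ld memory read, %ld memory write\n",
               wb_mem_reads, wb_mem_writes);
        return 0;
    }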

kamladi

@caryy Because on a multi-core system, the L3 cache is shared across cores. Say threads on different cores are writing to the same cache line. If a write-no-allocate, write-through cache were used, write misses would go straight to memory without storing the line in the cache, so threads on other cores would keep incurring cache misses because no one ever brings the line into the cache.

cgjdemo

@caryy A write-back cache reduces the number of writes to main memory, so it improves performance. There is a risk that data may be lost if the system crashes, but that is rare, and performance is usually the top priority.

BigFish

@caryy A write-allocate, write-back cache can greatly reduce memory accesses if the program reuses data. However, as in the last question of the assignment, when there is no reuse of the data we would actually like to bypass the cache and write directly to memory; in that case the cache only adds latency.
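One way to express that "bypass the cache on writes" idea is with streaming (non-temporal) store intrinsics. The sketch below scales an array by a constant, assuming the pointers are 16-byte aligned and N is a multiple of 4; the function name and parameters are made up for illustration:

    #include <immintrin.h>

    /* Sketch: write result[] with non-temporal stores so the written lines
     * are not allocated in the cache (there is no reuse of the output).
     * Assumes x and result are 16-byte aligned and N is a multiple of 4. */
    void scale_stream(float scale, const float *x, float *result, int N) {
        __m128 s = _mm_set1_ps(scale);
        for (int i = 0; i < N; i += 4) {
            __m128 v = _mm_mul_ps(s, _mm_load_ps(x + i));
            _mm_stream_ps(result + i, v);   /* non-temporal (streaming) store */
        }
        _mm_sfence();   /* order the streaming stores before later accesses */
    }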

iZac

Don't some modern processors have a mixture of these? That is, the L1 and L2 caches could be write-allocate, write-back, while the L3 cache could be write-no-allocate, write-through.

If a computer architect expects frequent DMA accesses, this might be favorable.