Slide 3 of 56
yey1

For a write-through cache: when we write data to the cache, the same data is also written to main memory. This can simplify the design of the system. For a write-back cache: the write to memory is deferred until the modified line is evicted from the cache.
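To make the difference concrete, here is a minimal sketch (not from the original discussion): a toy single-line cache that counts how many writes actually reach main memory under each policy. The `ToyCache` class and its field names are illustrative assumptions, not any real hardware interface.

```python
class ToyCache:
    """Toy one-line cache that tallies main-memory writes per policy."""

    def __init__(self, write_back):
        self.write_back = write_back
        self.line = None        # (address, value) of the single cached line
        self.dirty = False      # line modified but not yet in memory
        self.memory_writes = 0  # writes that actually reach main memory

    def write(self, addr, value):
        if self.line is not None and self.line[0] != addr:
            self.evict()        # conflict: make room for the new line
        self.line = (addr, value)
        if self.write_back:
            self.dirty = True           # defer the memory write
        else:
            self.memory_writes += 1     # write-through: memory updated now

    def evict(self):
        if self.write_back and self.dirty:
            self.memory_writes += 1     # deferred write happens on eviction
        self.line, self.dirty = None, False

wt = ToyCache(write_back=False)
wb = ToyCache(write_back=True)
for _ in range(100):                    # 100 repeated writes to one address
    wt.write(0x10, 42)
    wb.write(0x10, 42)
wb.evict()                              # flush the dirty line at the end
print(wt.memory_writes, wb.memory_writes)  # prints: 100 1
```

The repeated-write case shows the trade-off discussed below: write-through pays one memory write per store, while write-back coalesces them into a single write at eviction time.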

neonachronism

Of course, while write-through caches can be simpler, they incur greater overhead on repeated writes, because each write must go back to main memory.

kevinle1

When would you use a write-through cache, then? It seems really inefficient to write to main memory every time you write to the cache. If you have a system implemented this way, then a workload that is mostly cache writes will perform as badly as (or worse than) not using a cache at all, since you have to write to both places, right?

neonachronism

At the very least, write-through caches are simpler, while providing advantages for reads (and many programs read far more than they write). That being said, they don't seem to be at all popular for modern processor caches, for the above reasons.

For truly distributed systems, write-through can also be more reliable: a write that commits in a write-back cache doesn't yet correspond to an operation on the storage server, and by the time the cache is flushed, it may be too late to send it to the server (due to network partitions, etc.).