Slide 22 of 35
kayvonf

Write-through caches ensure memory always has the most up-to-date value stored at an address. This simplifies coherence, since to obtain the latest value stored at an address, the requesting processor can simply issue a memory load. Write-back caches pose new challenges to maintaining coherence, since the most recent value stored at an address may be present in a remote processor's cache, not in memory.
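A toy sketch of this distinction (class names and the dict-backed "memory" are invented for illustration; real coherence protocols also involve invalidation/snooping, which this omits):

```python
class WriteThroughCache:
    def __init__(self, memory):
        self.memory = memory   # shared backing store: addr -> value
        self.lines = {}        # this processor's private cache lines

    def store(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value   # every store also updates memory


class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}
        self.dirty = set()

    def store(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)        # memory is updated only on eviction

    def evict(self, addr):
        if addr in self.dirty:
            self.memory[addr] = self.lines[addr]
            self.dirty.discard(addr)
        self.lines.pop(addr, None)


memory = {0x10: 0}
wt = WriteThroughCache(memory)
wt.store(0x10, 42)
print(memory[0x10])   # 42: memory always holds the latest value

memory = {0x10: 0}
wb = WriteBackCache(memory)
wb.store(0x10, 42)
print(memory[0x10])   # 0: latest value lives only in the writer's cache
wb.evict(0x10)
print(memory[0x10])   # 42: flushed to memory on eviction
```

With the write-back cache, a load from memory between the store and the eviction would observe the stale value 0, which is exactly the coherence problem described above.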

tpassaro

@kayvonf Is it faster for one cache to supply data to another cache, or does it have the same cost as fetching the value from memory? Mainly, is there a fixed amount of time for getting data around on the interconnect?

zwei

I assume that traversing the interconnect is faster than going to main memory, but I'm curious whether the CPU would send a load request to each core and, in parallel, send a request to main memory in case none of the cores holds the value. The pipelining would mean this scheme takes no longer to read from main memory than before, but it may be too complicated to implement...
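The parallel-probe idea above can be sketched like this (a thread-based toy, not hardware; the function name and the dict-backed caches are invented for illustration):

```python
# Probe all peer caches and main memory concurrently; prefer a
# cache hit, and fall back to the memory fetch already in flight.
from concurrent.futures import ThreadPoolExecutor


def parallel_load(addr, peer_caches, memory):
    with ThreadPoolExecutor() as pool:
        cache_probes = [pool.submit(c.get, addr) for c in peer_caches]
        mem_fetch = pool.submit(memory.get, addr)   # issued in parallel
        for probe in cache_probes:
            value = probe.result()
            if value is not None:
                return value            # some peer cache held the line
        return mem_fetch.result()       # no peer hit: use memory's reply


peers = [{}, {0x10: 7}]
print(parallel_load(0x10, peers, {0x10: 1}))   # 7: served by a peer cache
print(parallel_load(0x20, [{}, {}], {0x20: 5}))  # 5: served by memory
```

Since the memory request is launched before the cache probes are inspected, a miss in every peer cache costs no more latency than going straight to memory, which is the pipelining argument made above.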

pebbled

@tpassaro While performing a load from another cache at the same (or a lower) level would certainly take less time, I think the logic to implement such an operation would be prohibitive. The primary issue I see is that there isn't any way for one processor to know what's in another processor's cache without doing something silly like asking it to load the value (which could, of course, result in a load from main memory just as often as a cache hit). A shared memory controller could try to keep track of this information, but that's exactly the point of the next level of cache: if one processor pulled data in recently, there's a good chance that data is still in the cache that is the lowest common ancestor of the two (the L3 in most architectures today).

pebbled

Just kidding... kind of.