Slide 12 of 35
Mayank

Question: Point 2 says "if the read and write are sufficiently separated in time". Doesn't a write operation end only after all of the cache-coherence logic has completed? If so, can this condition be re-worded as: "if the end time of the write < the start time of the read"?

jpaulson

@mayank: This is an abstract definition, so saying "a write operation ends only after all the logic for cache coherence has been completed" isn't necessarily true. A memory system could be coherent without meeting your requirement (but still meeting the three requirements on the slide).

That said, I'm curious how this works on actual CPUs: do writes return before all the cache coherence logic is done? Anyone know?

kayvonf

This is lecture 13!

pdixit

Just to explain write serialization with an example, using only one shared variable:

Initially:

    x = 0

Launch 4 threads on 4 CPUs:

    Proc 1  |  Proc 2  |  Proc 3     |  Proc 4
    x = 1   |  x = 2   |  while (1)  |  while (1)
            |          |    print x  |    print x

With coherence, the output may look like:

    Proc 3:  00012...   (or 0021111...)
    Proc 4:  00112...   (or 002221111...)

But if the output looks like:

    Proc 3:  000221...
    Proc 4:  001122...

then write serialization is broken, and hence coherence is not satisfied: Proc 3 observed the write of 2 before the write of 1, while Proc 4 observed them in the opposite order. It may be surprising, but there are processors that do not guarantee write serialization and instead ask the programmer to insert synchronization barriers to ensure the expected behavior. This makes sense: if you wrote such a program in a real-world scenario, you would probably add synchronization to enforce a particular ordering anyway.

(I deliberately avoid synchronization, or a second shared variable used as a flag, in this example because here we are talking only about coherence. As soon as we introduce two shared variables, we start talking about the ordering of loads/stores to two different (shared) locations; that behavior is defined by the processor's consistency model, which is treated separately.)