Slide 22 of 45
kayvonf

Yes, that's a good article.

dsaksena

I found it interesting how multiple rings are used to communicate faster. Prof. Kayvon also taught us this in the coherence lectures, where separate lines are used for request, snoop, ack, and data. Nice to see a picture of how it might look in hardware.

afa4

I don't really understand why each L3 is connected to the ring twice. Does it not lead to increased latency?

kayvonf

@afa4. Or perhaps decreased latency? (Why might this be the case?)

sgbowen

Are the interconnects bidirectional in the regular ring case? If not, I could see a latency decrease from having two connections, because you can choose the connection where traffic flows in the quicker direction. (But I may be wrong about this assumption.)
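The "pick the quicker direction" intuition can be checked with a quick hop-count model. This is a toy sketch, not Intel's actual design; the stop count `N = 8` is an assumption chosen for illustration.

```python
# Hop counts on an N-stop ring, comparing a unidirectional ring with a
# bidirectional one where the sender picks the shorter direction.
# Toy model with an assumed stop count, not the real hardware topology.

def uni_hops(src, dst, n):
    """Hops when messages may only travel one way around the ring."""
    return (dst - src) % n

def bi_hops(src, dst, n):
    """Hops when the sender chooses clockwise or counter-clockwise."""
    d = (dst - src) % n
    return min(d, n - d)

N = 8  # assumed number of ring stops
worst_uni = max(uni_hops(s, d, N) for s in range(N) for d in range(N))
worst_bi = max(bi_hops(s, d, N) for s in range(N) for d in range(N))
print(worst_uni, worst_bi)  # worst case drops from N-1 to N//2
```

With 8 stops, the worst-case distance drops from 7 hops to 4, which is one way two connections per stop could decrease rather than increase latency.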

lament

@kayvonf I would guess that connecting each L3 cache twice, once on each of the two paths between the system agent and the graphics block (see the diagram in the slide), would help decrease contention. If the left path from the system agent to graphics is occupied, an L3 cache controller can try the right-side path.

pmassey

If I'm not mistaken, making the ring bidirectional greatly reduces contention for the line, because two pairs of agents sending in opposite directions can communicate at the same time without sharing any link.
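This claim can be sanity-checked with a toy link-occupancy model: a message occupies the directed links along its chosen direction, and two messages conflict only if they share a directed link. The stop count and the specific endpoints here are assumptions for illustration.

```python
# Toy link-occupancy model for a bidirectional ring. A message from src
# to dst takes the shorter direction and occupies each directed link it
# traverses. Stop count is an assumption, not the actual hardware.

def links(src, dst, n):
    """Set of directed links used by a message taking the shorter way."""
    d = (dst - src) % n
    if d <= n - d:  # clockwise is no longer than counter-clockwise
        return {((src + i) % n, (src + i + 1) % n) for i in range(d)}
    d = n - d       # otherwise go counter-clockwise
    return {((src - i) % n, (src - i - 1) % n) for i in range(d)}

N = 8
a = links(0, 2, N)  # clockwise, uses links (0,1) and (1,2)
b = links(2, 0, N)  # counter-clockwise, uses links (2,1) and (1,0)
print(a.isdisjoint(b))  # True: opposite directions share no link
```

The model also shows the limit of the claim: two messages traveling the *same* direction over an overlapping segment do share links, so a bidirectional ring reduces contention rather than eliminating it entirely.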