dfarrow

It's interesting to note that each core's L3 is connected to the ring (which is unidirectional) in two places. This is primarily to reduce the number of hops needed to get from any core to any other core, which helps reduce latency.

If the caches were only connected to the ring in one place, then getting a message from core #4 to core #3 could potentially require going through the graphics node, the system agent node, and cores #1 and #2 before reaching core #3. If the caches are connected to both sides of the ring, then core #4 can send a message to core #3 in just one hop.
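
To make the hop-count argument concrete, here is a small Python sketch. The stop ordering and names below are hypothetical (not the actual die floorplan), but it shows how attaching each cache at two ring stops shortens the forward path on a one-way ring:

```python
# Rough sketch of hop counting on a unidirectional ring. The stop ordering
# below is purely hypothetical; it is not the actual die floorplan.

RING = ["sysagent", "core1_a", "core2_a", "core3_a", "core4_a",
        "gfx", "core4_b", "core3_b", "core2_b", "core1_b"]

# Each agent lists the ring stops it can use to inject/receive messages.
STOPS = {
    "core1": ["core1_a", "core1_b"],
    "core2": ["core2_a", "core2_b"],
    "core3": ["core3_a", "core3_b"],
    "core4": ["core4_a", "core4_b"],
    "gfx": ["gfx"],
    "sysagent": ["sysagent"],
}

def hops(src, dst):
    """Fewest forward hops from any of src's stops to any of dst's stops."""
    n = len(RING)
    return min((RING.index(d) - RING.index(s)) % n
               for s in STOPS[src] for d in STOPS[dst])

print(hops("core4", "core3"))  # 1 hop, using the second-side stops
# If core4 and core3 each had only their "_a" stop, the same trip would be
# (RING.index("core3_a") - RING.index("core4_a")) % len(RING) = 9 hops.
```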

cwswanso

Oh, that's cool! I was wondering why they would ever use a ring (precisely because of that latency concern).

Q_Q

Well, the other reason to use a ring is that the bus circuitry and the logic that implements it only have to send messages in one direction.

I think it is very likely that the Intel designers placed some flip-flops at each yellow rectangle, so that on each ring-clock edge, the values in the flip-flops at one yellow rectangle get forwarded to the next rectangle. If they had to implement bidirectionality, I suppose they would need twice the hardware, or some sort of scheduling system to coordinate using a single set of wires for both directions.
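
Here's a tiny sketch of that forwarding idea (assumed behavior, not Intel's actual design): each ring stop is modeled as a register, and on every ring-clock edge its value advances to the next stop, so a message moves exactly one hop per cycle and only ever travels one way.

```python
# Tiny model of registered ring stops (assumed behavior, not Intel's design):
# on each clock edge, every stop's value is latched into the next stop.

def tick(stops):
    """Advance the ring one clock: stop i forwards its value to stop i+1."""
    return [stops[-1]] + stops[:-1]

ring = ["msgA", None, None, "msgB", None, None]
for cycle in range(4):
    print(cycle, ring)
    ring = tick(ring)
```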