So here, will the blue flits advance as long as the grey flits are blocked? What happens when the grey flits can advance and there is contention for the middle link in the diagram?
@maxdecmeridius In this example, the blue flits can still advance even though the grey flits are blocked, because the switch can hold both and isn't required to process them in queue order.
I would expect that, once transmitting, the blue flits continue to progress until they get blocked somewhere, and then the switch considers transmitting the grey flits again. That seems to leave open the possibility of starvation, though, so maybe it can be implemented differently.
To clarify: with multiplexing, we are NOT increasing the number of flits that can be transmitted over a single link. Instead, it fixes head-of-line blocking: when the flit at the head of a queue is blocked, all flits behind it are blocked too, even if the links they need are free.
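To make the head-of-line point concrete, here's a toy sketch of one switch input port (names like `blocked_outputs` are made up for illustration, not from the slides):

```python
from collections import deque

def advance_single_fifo(fifo, blocked_outputs):
    """Single FIFO: only the head flit may go; a blocked head stalls everyone."""
    if fifo and fifo[0] not in blocked_outputs:
        return fifo.popleft()
    return None  # head is blocked -> everything behind it waits too

def advance_virtual_queues(vqs, blocked_outputs):
    """One virtual queue per output: any unblocked head flit may go."""
    for out, q in vqs.items():
        if q and out not in blocked_outputs:
            return q.popleft()
    return None

# grey flit (to a blocked output) sits ahead of a blue flit (to a free output)
fifo = deque(["grey_out", "blue_out"])
print(advance_single_fifo(fifo, {"grey_out"}))   # None: blue is stuck behind grey

vqs = {"grey_out": deque(["grey_out"]), "blue_out": deque(["blue_out"])}
print(advance_virtual_queues(vqs, {"grey_out"}))  # blue_out: blue bypasses grey
```

Same flits, same link capacity; the only change is that a blocked head no longer stalls flits destined for a free link.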
Are these virtual buffers in each switch created dynamically (i.e. the number of virtual buffers can increase depending on the number of incoming flits)? Or is it fixed? If it is fixed, doesn't multiplexing have a packet dropping issue (presumably the number of buffers is less than the size of the queue in the non-multiplexing solution)?
@trappedin418 I think the buffer space is fixed, since it is determined by the hardware.
There are several methods to manage buffers; one is to have every switch keep track of the free buffer space in its downstream nodes. A flit cannot proceed if the switch cannot allocate enough buffer space in the next switch.
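That tracking scheme can be sketched as credit-based flow control (my understanding of the general idea, not necessarily the slide's exact scheme): the upstream switch holds one credit per free downstream buffer slot, spends a credit per flit sent, and regains it when the downstream switch frees the slot.

```python
class Link:
    """Upstream side of a link, tracking free buffer slots downstream."""
    def __init__(self, downstream_buffer_slots):
        self.credits = downstream_buffer_slots

    def can_send(self):
        return self.credits > 0

    def send_flit(self):
        assert self.credits > 0, "no downstream buffer space"
        self.credits -= 1   # one downstream slot is now occupied

    def on_credit_return(self):
        # downstream forwarded a flit and freed a slot
        self.credits += 1

link = Link(downstream_buffer_slots=2)
link.send_flit(); link.send_flit()
print(link.can_send())   # False: further flits wait until a credit returns
link.on_credit_return()
print(link.can_send())   # True
```

This is why flits stall rather than get dropped: a flit is only put on the wire when a buffer slot is known to exist on the other end.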
In virtual channel flow control, does there exist a queue? Equivalently, is each virtual channel a queue that can contain flits from different packets? Or can each virtual channel contain only one flit?
Edit: looking at the next slide, I guess each virtual channel can contain flits from more than one packet. So is there a queue within each virtual channel? If so, maybe neither the blocking problem nor the deadlock issue can be completely avoided...
The reason to bring in virtual channels is to decouple the allocation of buffers from the allocation of channels. Without VCs, if a buffer is allocated to packet A, no packet except A can use the associated channel, so A can block other packets that need the same channel.
Each VC consists of a flit buffer plus some associated state, and multiple VCs per physical channel mean multiple buffers per physical channel. In this way, the original FIFO buffer (queue) becomes a "multi-lane" buffer, which increases network throughput through greater utilization of network capacity and allows more freedom in allocating resources, e.g. by priority.
Intrinsically, blocking can never be completely avoided since resources are limited. A virtual channel is allocated to a packet and can be reassigned once the packet's last flit exits. If no virtual channel is available, the packet blocks.
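The allocate-then-release-on-tail-flit behavior described above can be sketched like this (hypothetical interface, just to pin down the lifecycle):

```python
class VCPool:
    """VCs for one physical channel: each VC is bound to at most one packet."""
    def __init__(self, num_vcs):
        self.free = list(range(num_vcs))
        self.owner = {}  # vc id -> packet id

    def allocate(self, packet_id):
        if not self.free:
            return None          # packet blocks: no VC available
        vc = self.free.pop()
        self.owner[vc] = packet_id
        return vc

    def on_tail_flit(self, vc):
        # the packet's last flit exited -> the VC can be reassigned
        del self.owner[vc]
        self.free.append(vc)

pool = VCPool(num_vcs=1)
vc_a = pool.allocate("A")      # packet A gets the only VC
print(pool.allocate("B"))      # None: B blocks until A's tail flit exits
pool.on_tail_flit(vc_a)
print(pool.allocate("B"))      # 0: B gets the freed VC
```

So blocking still happens when the VCs run out; VCs just make it much less likely that an idle channel sits unusable.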
From what I understand, essentially virtual channels remove the FIFO constraint of a queue (i.e. a physical channel), and allow out-of-order passing of flits onto the next hop.
The motivation for doing so is to allow concurrency (e.g. if a flit is to be passed to a busy node, it should not block other flits) and allow QoS (some items have higher priority than others).
To my understanding, the physical channel is what actually transmits flits: if there is only one physical channel, only one flit can be transmitted at a time. Virtual channels are built on top of physical channels, and each virtual channel has its own FIFO queue, so flits in different VCs' queues do not block each other, while flits within the same FIFO queue are forwarded in order. The virtual channel implementation decides which VC's head flit gets sent over the shared physical channel.
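A tiny sketch of that arbitration (round-robin here purely as an example; a real switch could just as well use priorities, which is the QoS point made earlier):

```python
from collections import deque
from itertools import cycle

def round_robin_arbiter(vc_queues, cycles):
    """Each cycle, one head flit from some non-empty VC crosses the one
    physical channel; VCs are scanned round-robin. Returns flits in send order."""
    sent = []
    order = cycle(range(len(vc_queues)))
    for _ in range(cycles):
        for _ in range(len(vc_queues)):   # scan VCs, starting after last winner
            vc = next(order)
            if vc_queues[vc]:
                sent.append(vc_queues[vc].popleft())
                break
    return sent

vcs = [deque(["a0", "a1"]), deque(["b0", "b1"])]
print(round_robin_arbiter(vcs, 4))   # ['a0', 'b0', 'a1', 'b1']
```

Note that within each VC the flits still come out in FIFO order (`a0` before `a1`); only the interleaving across VCs is up to the arbiter.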
@RX That makes sense -- this sounds like in airports where there is a line for "regular passengers" and another for "priority passengers," and the airline staff determines which person is to go next.