Slide 18 of 45
yey1

Why is the latency O(1) rather than O(N)?

amolakn

In reference to @yey1's question, what causes latency in an interconnect in the first place? I think that's where my misunderstanding lies. If I wanted to go from one node to another, I'd still have to pass through switches along the way. But I'm guessing that isn't where the cost comes from, so where does it come from?

ant

I have the same question. Does the latency arise from having to run a complex routing algorithm at each node/switch (which is unnecessary in this case)? Or does it arise because a node may have to back off from sending its messages if the network is blocking (which it isn't in this case)?

bpr

@yey1, looking back through other sources, the picture is perhaps misleading. Some images I've seen are like the one above, while others show every node's output having N links, one for each node's input. The node then selects between the links, each of which has O(1) latency.

In Parallel Computer Architecture, p. 769: "[A crossbar] provides O(N) bandwidth, but the cost of the interconnect is proportional to the number of cross-points, or O(N^2)." On p. 803, the book shows three different implementations of crossbars. The key thing in the above picture is that the horizontal input lines reach all switches simultaneously. The network arbitrates as to which input will drive the vertical line, and that signal then propagates to the destination without passing through any further links or switches.
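To make the trade-off concrete, here is a minimal sketch (my own toy model, not from the book) of an N x N crossbar: the hardware cost is one cross-point per input/output pair, i.e. O(N^2), while a granted message crosses exactly one switch point, so its latency is O(1) regardless of N. The `Crossbar` class and its greedy first-come-first-served arbitration policy are illustrative assumptions; real crossbars use hardware arbiters (e.g. round-robin) per output line.

```python
# Toy model of an N x N crossbar (illustrative, not a real design).
class Crossbar:
    def __init__(self, n):
        self.n = n
        # One cross-point per (input, output) pair: O(N^2) hardware cost.
        self.crosspoints = n * n

    def arbitrate(self, requests):
        """requests: list of (src, dst) pairs wanting to send this cycle.

        Each output line can be driven by at most one input, and each
        input can drive at most one output. Greedy first-come-first-served
        arbitration (a stand-in for a real hardware arbiter). Every
        granted pair traverses exactly one cross-point: O(1) latency.
        """
        granted = []
        used_inputs, used_outputs = set(), set()
        for src, dst in requests:
            if src not in used_inputs and dst not in used_outputs:
                granted.append((src, dst))
                used_inputs.add(src)
                used_outputs.add(dst)
        return granted


xbar = Crossbar(4)
print(xbar.crosspoints)  # 16 cross-points for N = 4
# Requests (0->2) and (1->2) conflict on output 2; only one is granted.
print(xbar.arbitrate([(0, 2), (1, 2), (3, 0)]))  # [(0, 2), (3, 0)]
```

Note that the conflicting request (1, 2) simply loses arbitration for this cycle and would retry on the next one; the network itself is non-blocking in the sense that any conflict-free set of requests can be granted simultaneously.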

yeezus98

How could we make crossbar interconnects easier to arbitrate at scale?