Slide 33 of 72
nemo

Can somebody explain why bandwidth is higher as we move up in the hierarchy? I am thinking of bandwidth in terms of let's say the number of lanes in the PCI bus...

muchanon

Bandwidth is a rate: it is how much data can be transferred in a period of time. As we move up the memory hierarchy, we will likely experience higher bandwidth because the higher levels are smaller, physically closer to the processor, and have wider and faster connections to it, so they can deliver more data in the same period of time.
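Since bandwidth is just a rate, the effect on transfer time is simple arithmetic. A toy calculation (the bandwidth numbers below are illustrative assumptions, not measurements from the lecture):

```python
def transfer_time(bytes_moved, bandwidth_bytes_per_s):
    """Time to move a block of data at a given sustained bandwidth."""
    return bytes_moved / bandwidth_bytes_per_s

GB = 1e9
# Illustrative (assumed) sustained bandwidths:
dram_bw = 25 * GB   # e.g. one DDR4 memory channel, ~25 GB/s
pcie_bw = 16 * GB   # e.g. a PCIe 3.0 x16 link, ~16 GB/s

print(transfer_time(1 * GB, dram_bw))  # 0.04 s
print(transfer_time(1 * GB, pcie_bw))  # 0.0625 s
```

Same 1 GB of data, different levels of the hierarchy, different transfer times.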

jkorn

@nemo to add on to the above answer, recall from the car example that we can increase bandwidth not only by adding extra lanes or sending cars closer together, but also by simply making the cars go faster (i.e., reducing latency). If each request completes sooner, then with the same number of requests in flight you complete more of them per second, which is a higher rate of transfer. Similarly here, as you move up the memory hierarchy you get lower-latency access: the L1 and L2 caches are directly on the chip itself, so if the CPU requests data located in these caches, it comes back much faster than if it has to go off-chip to the L3 cache, or possibly even to memory.
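The latency/bandwidth connection above can be made concrete with Little's law: with a fixed number of requests in flight, achieved bandwidth = bytes in flight / latency. A toy calculation (the 64-byte line size, request counts, and latencies are assumed for illustration):

```python
CACHE_LINE = 64  # bytes moved per memory request (assumed line size)

def achieved_bandwidth(outstanding_requests, latency_s):
    """Little's law: throughput = concurrency / latency."""
    return outstanding_requests * CACHE_LINE / latency_s

# Same 10 requests in flight, different (assumed) access latencies:
print(achieved_bandwidth(10, 100e-9))  # DRAM-like 100 ns -> ~6.4e9 B/s
print(achieved_bandwidth(10, 4e-9))    # L1-like 4 ns     -> ~1.6e11 B/s
```

Cutting the latency 25x raises the achieved bandwidth 25x without adding any "lanes" at all.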

anonymous

For the memory hierarchy, we can also add price as a metric. From top to bottom, the price of 1KB of memory becomes much cheaper.
Register
Local L1 - SRAM
Local L2 - SRAM
...

shreeduh

@nemo, the higher bandwidth is due to lower latency as we move up the memory hierarchy, and that lower latency stems from chip-level electrical behavior. The larger the memory, the slower its read/write speed, because the read/write buses must reach a larger number of storage cells, which increases the capacitance on the bus and hence slows it down. So smaller memory devices (as well as better technology - SRAM vs. DRAM, at a higher cost) are placed closer to the processing units, and with locality this gives good performance.
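The size/speed trade-off described above can be probed with a micro-benchmark: sweep working sets of different sizes and compare the time per byte. A rough sketch (sizes and repeat counts are arbitrary assumptions, and Python's interpreter overhead blunts the effect compared with the same loop in C, so treat the printed numbers as qualitative only):

```python
import time
from array import array

def sweep_time(n_bytes, repeats=3):
    """Time repeated sequential sweeps over an n_bytes working set.

    Returns (seconds per byte, checksum of all sweeps).
    """
    buf = array('Q', range(n_bytes // 8))  # 8-byte unsigned integers
    total = 0
    start = time.perf_counter()
    for _ in range(repeats):
        total += sum(buf)
    elapsed = time.perf_counter() - start
    return elapsed / (repeats * n_bytes), total

# Working sets chosen to land roughly in L1, L2, and beyond (assumed sizes):
for size in (16 * 1024, 256 * 1024, 4 * 1024 * 1024):
    per_byte, _ = sweep_time(size)
    print(f"{size:>8} bytes: {per_byte:.2e} s/byte")
```

On real hardware (especially in C), the time per byte jumps noticeably each time the working set outgrows a cache level, which is exactly the capacitance/size effect described above showing up in software.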