sss

Here's an analogy, alluded to in class, for latency and bandwidth: suppose the professor is asked to carry some bits from one side of the classroom to the other.

He grabs two bits, one in each hand, and walks to the other side, taking 10 seconds. The time it took for the data to reach the other side, the latency, is 10 seconds. His bandwidth is 2 bits / 10 seconds = 0.2 b/s. The professor can instead run from one side to the other, taking 5 seconds, so his latency is 5 seconds and his bandwidth is 2 bits / 5 seconds = 0.4 b/s.

The professor can instead grab a bag, stuff it with 5000 bits, and run to the other side. His latency is still 5 seconds, but his bandwidth is now 1 kb/s. An interesting note: if the professor grabs 10000 bits and walks to the other side, his bandwidth is still 1 kb/s, but his latency is 10 seconds.
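A small sketch of the arithmetic above, using only the numbers from the analogy (bandwidth is just bits moved divided by trip time):

```python
# Latency is how long one trip takes; bandwidth is bits moved per unit time.
def bandwidth_bps(bits, trip_seconds):
    return bits / trip_seconds

scenarios = [
    ("walk, 2 bits",      2,     10),  # 0.2 b/s, latency 10 s
    ("run, 2 bits",       2,      5),  # 0.4 b/s, latency 5 s
    ("run, 5000-bit bag", 5000,   5),  # 1000 b/s (1 kb/s), latency 5 s
    ("walk, 10000 bits",  10000, 10),  # 1000 b/s (1 kb/s), latency 10 s
]

for name, bits, secs in scenarios:
    print(f"{name}: latency = {secs} s, bandwidth = {bandwidth_bps(bits, secs):.1f} b/s")
```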

An expansion of the interesting note is the latency and bandwidth of satellite internet. Satellite internet signals typically have to travel very far, pushing latency into the 1000 ms range, as opposed to the ~30 ms average of regular terrestrial internet. As a result, an uncongested 15 MB/s satellite link can still feel slow for some web pages or games, yet score high on bandwidth tests. Such a link is also called a long fat network, or LFN (also pronounced "elephant"), and LFNs have been considered in the design of extensions to TCP/IP.
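One way to see why a high-bandwidth, high-latency link still feels slow: if loading a page requires many sequential round trips, the round-trip time dominates. A back-of-the-envelope sketch (the 20-round-trip and 2 MB figures are made-up illustrative assumptions, not measurements):

```python
# Rough illustration: a page that needs 20 sequential round trips
# (DNS, TCP handshake, dependent requests) plus 2 MB of data.
def load_time_s(rtt_s, bandwidth_Bps, round_trips=20, total_bytes=2e6):
    return round_trips * rtt_s + total_bytes / bandwidth_Bps

terrestrial = load_time_s(rtt_s=0.030, bandwidth_Bps=15e6)  # ~0.73 s
satellite   = load_time_s(rtt_s=1.000, bandwidth_Bps=15e6)  # ~20.1 s

print(f"terrestrial: {terrestrial:.2f} s, satellite: {satellite:.2f} s")
```

Both links have the same 15 MB/s of bandwidth; only the latency differs, yet the satellite load time is dominated by the round trips.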

idl

Does physical distance between memory and the processor matter at all here? Not sure if this is a silly question, but with the analogy Kayvon illustrated in class (him walking back and forth at the front of the lecture hall), shortening that distance would have been an obvious solution.

kayvonf

@idl: in fact it does. Long wires increase cost, require more energy to move data across, and suffer from longer communication latency. I bet Yixin has an even better answer. Yixin?

yixinluo

@idl: Kayvon's analogy in class is just a demonstration of bandwidth and latency. For the DRAM in your laptop, the actual memory latency is dominated by wire/logic delays inside the memory module. Using numbers from the JEDEC DDR3 standard, it takes 15-50 ns to get a cache line from the DRAM module, and only ~4 ns to transfer it from DRAM to the CPU.
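To see where the ~4 ns transfer figure comes from: a 64-byte cache line moves 8 bytes per transfer over a 64-bit DDR3 bus, so the estimate looks roughly like the sketch below (DDR3-1600 timing assumed for illustration; the 15-50 ns comes from the array and logic inside the module, not the bus):

```python
# Rough estimate of the bus-transfer time for one cache line (assumed DDR3-1600 numbers).
cache_line_bytes = 64
bytes_per_transfer = 8          # 64-bit DDR3 bus
transfers_per_second = 1600e6   # DDR3-1600: 1600 MT/s

transfer_time_ns = cache_line_bytes / bytes_per_transfer / transfers_per_second * 1e9
print(f"cache line transfer: {transfer_time_ns:.1f} ns")  # ~5 ns, same ballpark as ~4 ns
```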

idl

@kayvon @yixinluo I see, thanks!

kayvonf

The latency of an operation is the time to complete the operation. Some examples:

  • The time to complete a load of laundry is one hour.
  • The time between placing an order on Amazon and it arriving on your doorstep (with Amazon Prime) is two days.
  • If you turn on a faucet (assuming the pipe connected to the faucet starts with no water in it), the latency is the time for water entering the far end of the pipe to arrive in your sink.
  • If I type ping www.google.com I get a round trip time of 30 ms, so I can estimate the latency of communicating between my computer and Google's servers is approximately 15 ms.
  • If there's no traffic on the roads, it takes 5 minutes to drive to Giant Eagle from my house, regardless of how wide the roads are.
  • If a processor requests data from memory, it doesn't get the first bit back for hundreds of clock cycles.

Bandwidth is a rate, typically stuff per unit time.

  • A bigger washer would increase your bandwidth (more clothes per hour)
  • A wider pipe would increase the flow into your sink (more gallons per second)
  • A multi-lane road increases bandwidth (cars per second)
  • A fast internet connection gives you higher bandwidth (100 gigabits per second)
  • Once memory receives a request, it responds by sending eight bytes per clock cycle over the memory bus.

Note in the above examples: there are two ways you can increase the rate at which things get done: make the channel wider (get a bigger pipe, add lanes, use a wider memory bus) or push stuff through it faster (increase water pressure, raise the speed limit, increase the bus clock rate).
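As a concrete version of those two knobs, here's a minimal sketch (the bus numbers are illustrative assumptions, not from the slide): peak bandwidth is just channel width times transfer rate, so you can raise it either by widening the channel or by pushing data through it faster.

```python
# Peak bandwidth = width of the channel * rate at which you push things through it.
def peak_bandwidth_GBps(bytes_per_transfer, transfers_per_second):
    return bytes_per_transfer * transfers_per_second / 1e9

base   = peak_bandwidth_GBps(8, 1.6e9)   # 8-byte bus at 1.6 GT/s -> 12.8 GB/s
wider  = peak_bandwidth_GBps(16, 1.6e9)  # widen the channel      -> 25.6 GB/s
faster = peak_bandwidth_GBps(8, 3.2e9)   # push through faster    -> 25.6 GB/s

print(base, wider, faster)
```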