Separating the bus into several independent lines allows this pipelining to occur because it lets different requests start before others have finished. This is only possible because a request is broken into phases: once a transaction is done with a phase, those lines are free to carry the next message. Overall, it increases the throughput and decreases the latency for requests.
All the complicated communication and design here is just trying to make a synchronous transaction asynchronous. Designing an asynchronous system is not easy even at the software layer. Great work!
@ycp: I think pipelining increases throughput but does not necessarily decrease latency.
Is latency decreased or not?
I think it does, because unlike the atomic-bus case, other processors no longer have to wait for the bus to finish an entire request-response pair. Thus the average latency will be lower.
@iamk_d___g: I think the decrease in latency is a result of latency hiding. For a single transaction, the latency is not decreased; but because multiple transactions can be in flight at the same time, unlike on an atomic bus, the latency is hidden.
I agree with @shabnam, latency decreases because most threads will spend less time waiting for the bus. The pipelining allows the second memory transaction to go ahead and make its request while the first is waiting for its response. On an atomic bus, the second transaction would instead have to wait for the first to receive its response before making its request.
This is very similar to the washer/dryer problem we looked at a while ago. I believe that any single transaction, disregarding other transactions, does not get decreased latency. However, once multiple transactions are considered, we can see the latency hiding in action.
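To make the latency-hiding argument concrete, here is a toy timing model (all numbers are illustrative assumptions, not from the discussion): each transaction needs a few cycles on the request lines, a memory access, and a few cycles on the response lines, and N processors all issue a request at time 0. On the atomic bus the bus is held for the whole transaction; on the split-transaction bus requests pipeline back to back while earlier transactions wait on memory.

```python
# Assumed cycle counts: request phase, memory access, response phase,
# and the number of processors queued up at time 0.
REQ, MEM, RESP, N = 2, 10, 2, 8

# Atomic bus: the bus is held for request + memory + response,
# so transaction i finishes only after all earlier ones complete.
atomic_finish = [(i + 1) * (REQ + MEM + RESP) for i in range(N)]

# Split-transaction bus: request and response use separate lines,
# so a new request can start as soon as the previous request phase ends.
split_finish = []
resp_free = 0  # time at which the response lines become free
for i in range(N):
    req_done = (i + 1) * REQ            # requests pipeline back to back
    ready = req_done + MEM              # data ready from memory
    start_resp = max(ready, resp_free)  # wait for the response lines
    resp_free = start_resp + RESP
    split_finish.append(resp_free)

avg_atomic = sum(atomic_finish) / N
avg_split = sum(split_finish) / N
print(f"avg latency, atomic bus: {avg_atomic:.1f} cycles")
print(f"avg latency, split bus:  {avg_split:.1f} cycles")
```

As the comments above argue, the first transaction takes the same number of cycles either way; it is the later transactions in the queue whose waiting is overlapped with earlier memory accesses, which pulls the average latency down.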