perfecthash

Professor Mowry mentioned that after assigning work to processors, we are only as fast as the slowest processor, i.e., the one with the most work. Therefore, it is important to write code that distributes the work as evenly as possible. However, he also mentioned that if achieving an even distribution requires a complex assignment algorithm with a lot of overhead, it can actually hurt the speedup. That wasn't something I had considered, and it's interesting how many facets there are to consider when writing parallel code.
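
To make the tradeoff concrete, here is a minimal C++ sketch (my own illustration, not from the lecture; process() and the grain parameter are made up for the example). Dynamic assignment from a shared counter balances uneven work better than a fixed split, but every grab of the counter is synchronization overhead:

    #include <algorithm>
    #include <atomic>
    #include <thread>
    #include <vector>

    // Stand-in for real work; pretend its cost varies with task_id.
    void process(int task_id) {
        volatile long sink = 0;
        for (int i = 0; i < task_id % 1000; ++i) sink += i;
    }

    void run_dynamic(int num_tasks, int num_threads, int grain) {
        std::atomic<int> next{0};  // shared work counter
        std::vector<std::thread> workers;
        for (int t = 0; t < num_threads; ++t) {
            workers.emplace_back([&] {
                for (;;) {
                    // Each fetch_add is a synchronized read-modify-write on
                    // a shared cache line: a larger grain amortizes that
                    // cost, a smaller grain balances load better.
                    int begin = next.fetch_add(grain);
                    if (begin >= num_tasks) break;
                    int end = std::min(begin + grain, num_tasks);
                    for (int i = begin; i < end; ++i) process(i);
                }
            });
        }
        for (auto& w : workers) w.join();
    }

With grain == 1 every task costs one atomic operation to claim, while grain == num_tasks / num_threads degenerates into a static split with no balancing at all; the sweet spot is somewhere in between, which is exactly the overhead-versus-balance tension above.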

gogogo

Yup. Another factor to take into consideration is the overhead of creating threads, which can be costly in some cases. Locks can also be fairly expensive and should be accounted for when designing a system. For example, in a cache, there are eviction schemes that are less expensive (in terms of locking) than standard LRU (least recently used).
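
As a concrete instance of that last point, here is a rough C++ sketch (my own simplification; CLOCK is the classic cheap approximation of LRU). An exact LRU must take a lock on every hit to splice its recency list, whereas CLOCK only flips a per-slot reference bit on the hot path and defers all locking to the comparatively rare eviction:

    #include <atomic>
    #include <cstddef>
    #include <mutex>
    #include <vector>

    struct ClockCache {
        std::vector<std::atomic<bool>> referenced;  // one flag per slot
        std::size_t hand = 0;
        std::mutex evict_lock;  // taken only on eviction, never on a hit

        explicit ClockCache(std::size_t slots) : referenced(slots) {}

        // Hot path: a hit just sets the reference bit; no lock acquired.
        void on_access(std::size_t slot) {
            referenced[slot].store(true, std::memory_order_relaxed);
        }

        // Cold path: sweep the hand, clearing bits, until an unreferenced
        // slot is found; that slot is the eviction victim.
        std::size_t pick_victim() {
            std::lock_guard<std::mutex> g(evict_lock);
            while (referenced[hand].exchange(false, std::memory_order_relaxed)) {
                hand = (hand + 1) % referenced.size();
            }
            std::size_t victim = hand;
            hand = (hand + 1) % referenced.size();
            return victim;
        }
    };

The same idea shows up in OS page replacement: sweeping reference bits is far cheaper than maintaining an exact recency order under a lock.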

BBB

Actually, even being as fast as the slowest processor is optimistic. As parallelism increases, the cost of communication between cores becomes a significant factor in run-time. Not only do we have to wait for the slow core to compute its result, we also have to wait for it to pass that result along. We saw this in demo 1.
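
To see where that communication shows up even in a trivial program, here is a small C++ sketch (my own example, not demo 1; the 64-byte cache-line size is an assumption about the machine). Each worker computes its partial sum privately, but the final core still has to pull every partial's cache line across the interconnect before it can produce the answer, a step that grows with the core count:

    #include <cstddef>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Pad each partial to its own cache line so workers don't communicate
    // (via coherence traffic) while they compute.
    struct alignas(64) Partial { long sum = 0; };

    int main() {
        const int P = 4;                     // number of worker threads
        std::vector<long> data(1 << 20, 1);  // toy input: 2^20 ones
        std::vector<Partial> partial(P);

        std::vector<std::thread> workers;
        for (int t = 0; t < P; ++t) {
            workers.emplace_back([&, t] {
                std::size_t chunk = data.size() / P;
                std::size_t lo = t * chunk;
                std::size_t hi = (t == P - 1) ? data.size() : lo + chunk;
                for (std::size_t i = lo; i < hi; ++i) partial[t].sum += data[i];
            });
        }
        for (auto& w : workers) w.join();

        // Communication step: this core must fetch P cache lines from the
        // other cores' caches before it can finish.
        long total = 0;
        for (int t = 0; t < P; ++t) total += partial[t].sum;
        std::printf("total = %ld\n", total);
        return 0;
    }

If the partials all shared one cache line instead, the threads would also be communicating during the compute loop (false sharing), which is exactly the kind of hidden cost that gets worse as the number of cores goes up.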