rohit

Isn't the phrase "that scale" at the end of the first sentence on the slide redundant? As programmers, we write parallel code inherently assuming that it will scale out to as many processing units as possible. Or does "scale" refer to some other aspect of parallel programming (communication, for example)?

arjunh

@rohit Good question; we'll examine this point in a later assignment, where you will parallelize some simple algorithms both on a 'small' multi-core machine (i.e. the Gates machines) and on really large machines with over 100 cores. You may be surprised by some of the results!

The reasons for sub-linear scaling are numerous. They can involve communication (as you noted), but also other factors, such as contention for locks/resources, locality of memory accesses, the hardware implementation of memory coherence protocols, and so on. A well-written parallel program may very well falter on an architecture with unfavorable characteristics, which is part of why writing high-performance parallel software is so difficult.
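To make one of these concrete, here is a minimal sketch (my own illustration, not from the lecture) of false sharing: two threads each write only to their own counter, but when the counters sit on the same cache line, the coherence protocol bounces that line between cores on every write, so the "shared" layout typically runs several times slower than the padded one despite identical work.

```cpp
// Build with: g++ -O2 -pthread false_sharing.cpp
#include <atomic>
#include <thread>

struct Shared {
    std::atomic<long> a{0};
    std::atomic<long> b{0};              // likely the same cache line as 'a'
};

struct Padded {
    alignas(64) std::atomic<long> a{0};  // forced onto its own cache line
    alignas(64) std::atomic<long> b{0};
};

template <typename T>
void run(T& s) {
    auto bump = [](std::atomic<long>& c) {
        for (long i = 0; i < 50'000'000; ++i)
            c.fetch_add(1, std::memory_order_relaxed);
    };
    std::thread t1(bump, std::ref(s.a));
    std::thread t2(bump, std::ref(s.b));
    t1.join();
    t2.join();
}

int main() {
    Shared s; run(s);  // coherence traffic from false sharing
    Padded p; run(p);  // same logical work, far less traffic
}
```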

hohoho

Based on my experience with 15-210, some algorithms do not parallelize as well as others. I solved the Maximum Contiguous Subarray Sum problem (the first problem in 210!) on www.leetcode.com using two different algorithms. The first simply traverses the array, while the second is the divide-and-conquer algorithm that was introduced in 210. The second turned out to be slower than the first, but it is much more parallelizable. That said, the algorithm is only a small part of writing good parallel code. :)
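For anyone curious, here is a rough sketch of both approaches in C++ (my own reconstruction, not my actual submissions; assumes a non-empty array). Note how the scan carries a sequential dependency from one iteration to the next, while the two recursive calls in the divide-and-conquer version are independent and could run in parallel:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// O(n) linear scan (Kadane's algorithm): each step depends on the last.
long long maxSubarrayScan(const std::vector<long long>& a) {
    long long best = LLONG_MIN, cur = 0;
    for (long long x : a) {
        cur = std::max(x, cur + x);   // sequential dependency on 'cur'
        best = std::max(best, cur);
    }
    return best;
}

// O(n log n) divide and conquer: call as maxSubarrayDC(a, 0, a.size() - 1).
long long maxSubarrayDC(const std::vector<long long>& a, int lo, int hi) {
    if (lo == hi) return a[lo];
    int mid = (lo + hi) / 2;
    long long left  = maxSubarrayDC(a, lo, mid);      // independent of...
    long long right = maxSubarrayDC(a, mid + 1, hi);  // ...this call
    // Best subarray crossing the midpoint: best suffix of the left half
    // plus best prefix of the right half.
    long long sum = 0, bestL = LLONG_MIN;
    for (int i = mid; i >= lo; --i) { sum += a[i]; bestL = std::max(bestL, sum); }
    sum = 0;
    long long bestR = LLONG_MIN;
    for (int i = mid + 1; i <= hi; ++i) { sum += a[i]; bestR = std::max(bestR, sum); }
    return std::max({left, right, bestL + bestR});
}
```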

abist

@hohoho I agree that some algorithms cannot (or can hardly) be parallelized, as we learnt in 210, but I think the array-traversal method for the maximum subarray problem can actually be parallelized. Also, here's a cool paper I found about it: Maximum Subarray parallel
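To sketch why the traversal parallelizes (this is the standard segment-summary trick, not necessarily the exact construction in the linked paper): summarize each segment by four values, and merge summaries with an associative combine. Because the combine is associative, the fold below could equally be done as a balanced-tree parallel reduce, giving O(log n) span just like 210's reduce.

```cpp
#include <algorithm>
#include <vector>

struct Summary {
    long long total;   // sum of the whole segment
    long long prefix;  // best sum of any prefix
    long long suffix;  // best sum of any suffix
    long long best;    // best sum of any subarray
};

Summary leaf(long long x) { return {x, x, x, x}; }

// Associative combine for adjacent segments l and r.
Summary combine(const Summary& l, const Summary& r) {
    return {
        l.total + r.total,
        std::max(l.prefix, l.total + r.prefix),
        std::max(r.suffix, r.total + l.suffix),
        std::max({l.best, r.best, l.suffix + r.prefix}),
    };
}

// Written sequentially here for clarity; a parallel reduce over the same
// 'leaf' and 'combine' yields the identical answer. Assumes non-empty input.
long long maxSubarray(const std::vector<long long>& a) {
    Summary acc = leaf(a.front());
    for (size_t i = 1; i < a.size(); ++i)
        acc = combine(acc, leaf(a[i]));
    return acc.best;
}
```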