Slide 9 of 43
mnshaw

This graph shows that as the number of processors increases, speedup rises for larger problems. For small problems, however, there's no point in adding more processors, and doing so can even cause a slowdown. This shows that you don't always want to parallelize or increase the number of processors.

chenh1

The shape of the graph likely results from the tradeoff between the speedup from parallelism and the cost of communication. For large problems the speedup outweighs the communication cost, while for small problems the communication cost outweighs the speedup.
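This tradeoff can be sketched with a toy cost model (the formula and the `sync_cost` constant are made-up assumptions for illustration, not taken from the slide): charge one unit of work per grid cell, divide compute across p processors, and add a per-processor communication/synchronization term that grows with p.

```python
def modeled_speedup(n, p, sync_cost=1000.0):
    """Toy estimated speedup for an n x n grid on p processors.

    sync_cost is a made-up per-processor communication/sync charge,
    measured in the same units as one cell update.
    """
    t_serial = n * n                                  # one unit of work per cell
    t_parallel = n * n / p + sync_cost * (p - 1)      # compute + communication
    return t_serial / t_parallel

# For small n the communication term dominates and speedup can drop
# below 1 (a slowdown); for large n, compute dominates and scaling holds.
for n in (130, 1026):
    print(n, [round(modeled_speedup(n, p), 2) for p in (1, 4, 16, 32)])
```

The exact numbers are arbitrary, but the shape matches the graph: the small-problem curve flattens and then dips below 1, while the large-problem curve keeps climbing.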

jocelynh

Is there a good rule of thumb for determining when a problem is "big enough" to benefit from parallelism? Or would one just have to make a graph and find out what the speedup(/slowdown) is?

Master

In the graph, as the number of processors goes up, arithmetic intensity drops and communication overhead grows. The 32-processor, N = 130 case is worse than the single-core version: the work assigned to each processor is so small that communicating takes more time than computing everything on a single processor would.

By the way, all the values of N here are 2**n + 2, because each cell's computation has to touch two extra boundary edges of the grid.
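A minimal sketch of why the +2 shows up, assuming a grid solver with a 5-point stencil (the Jacobi-style averaging and the helper names here are my assumptions; only the sizing follows the comment): the grid holds 2**n interior rows/columns plus one boundary row on each side, so every interior cell can read all four neighbors without going out of bounds.

```python
def allocate_grid(n):
    """An (2**n + 2) x (2**n + 2) grid: interior cells plus boundary edges."""
    size = 2 ** n + 2
    return [[0.0] * size for _ in range(size)]

def sweep(grid):
    """One Jacobi-style averaging sweep over interior cells only;
    the two boundary rows/columns are read but never written."""
    size = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, size - 1):
        for j in range(1, size - 1):
            new[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                grid[i][j-1] + grid[i][j+1])
    return new

grid = allocate_grid(7)      # 2**7 + 2 = 130, matching the N = 130 case above
print(len(grid))             # 130
```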

o_o

This graph basically illustrates how, for some problems, the communication costs make parallelism less efficient than a sequential solution. I feel like there isn't an easy way to tell in advance whether a problem will benefit from parallelism, but it seems that the larger the N, the more likely it is to help.

life

@jocelynh: I heard that in the 15-210 course there is a concept called granularity control: a solution switches from the parallel algorithm to the sequential one when the problem size falls below a certain threshold. Maybe you can check this: https://www.cs.cmu.edu/~15210/pasl.html
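The pattern can be sketched as a fork-join sum with a sequential cutoff (the `CUTOFF` value and function names are made up for illustration; also note that CPython threads don't actually run Python bytecode in parallel, so this only demonstrates the control structure, not a real speedup):

```python
import threading

CUTOFF = 10_000  # tune per machine; too small and spawn overhead wins

def parallel_sum(xs, lo, hi, out, idx):
    """Sum xs[lo:hi] into out[idx], forking only above the cutoff."""
    if hi - lo <= CUTOFF:
        out[idx] = sum(xs[lo:hi])        # sequential base case: no forking
        return
    mid = (lo + hi) // 2
    partial = [0, 0]
    t = threading.Thread(target=parallel_sum, args=(xs, lo, mid, partial, 0))
    t.start()                            # fork left half
    parallel_sum(xs, mid, hi, partial, 1)  # recurse on right half here
    t.join()
    out[idx] = partial[0] + partial[1]

data = list(range(100_000))
result = [0]
parallel_sum(data, 0, len(data), result, 0)
print(result[0])   # 4999950000
```

The cutoff plays exactly the role discussed above: below it, the cost of spawning and joining would exceed the cost of just doing the work sequentially.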