Slide 13 of 64
chenh1

When N is large enough, the sequential-execution property is unchanged (the critical section still serializes), but with tasks of 10 elements the critical section is entered one-tenth as often, so the total time spent in it is cut roughly by a factor of 10 and the limit it places on speedup is loosened.
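A crude back-of-the-envelope model of that effect (the numbers for N, per-element work, critical-section cost, and worker count are all assumed, not from the slide): treat the critical section as fully serialized and everything else as perfectly parallel, then see how the speedup bound changes with task granularity.

```c
#include <stdio.h>

/* Pessimistic model: the lock serializes every counter update, so
 *   T_parallel >= N*t_work/P + (N/g)*t_crit,   with  T_seq = N*t_work.
 * Increasing the granularity g divides the serialized term by g. */
int main(void) {
    const double N = 1e6;        /* elements to test                        */
    const double t_work = 10.0;  /* avg. time per primality test (assumed)  */
    const double t_crit = 1.0;   /* time per lock/update/unlock  (assumed)  */
    const double P = 8.0;        /* number of workers                       */

    for (int g = 1; g <= 100; g *= 10) {
        double t_par = N * t_work / P + (N / g) * t_crit;
        printf("granularity %3d -> speedup bound %.2f\n",
               g, (N * t_work) / t_par);
    }
    return 0;
}
```

With these made-up numbers the bound on 8 workers goes from about 4.4x at granularity 1 to about 7.9x at granularity 100, which is the "loosened limit" described above.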

Nup

Could increasing the task granularity result in an uneven distribution of workload? Now each task has to test the primality of 10 elements instead of one.

coffee7

@Nup making the granularity more coarse (i.e. increasing the number of elements in a task) could still result in an uneven workload. For example, if there are 10 really large numbers in a row, the worker who takes that task will have a larger workload than a worker whose 10 numbers are small or medium. It largely depends on the numbers in the input and their order.

As @chenh1 has said, the case for making the granularity more coarse comes down to the time spent in the critical section vs. the time spent actually executing a task. If it takes more time (on average) to execute the critical section than to complete one task, it may be a good idea to make the granularity coarser. The sketch below shows where those two costs sit in the worker loop.
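A minimal sketch of the kind of worker loop being discussed (names like `GRANULARITY` and `test_primality` and the array sizes are placeholders, not the slide's actual code): the lock is taken once per batch of `GRANULARITY` elements, and all of the primality testing happens outside the lock.

```c
#include <pthread.h>
#include <stdbool.h>

#define N           1024
#define GRANULARITY 10    /* elements claimed per lock acquisition; 1 = fine-grained */

static int x[N];          /* numbers to test (filled in elsewhere) */
static bool is_prime[N];
static int counter = 0;   /* next unclaimed index */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static bool test_primality(int v) {   /* trial division, just for illustration */
    if (v < 2) return false;
    for (int d = 2; (long)d * d <= v; d++)
        if (v % d == 0) return false;
    return true;
}

void *worker(void *arg) {
    (void)arg;
    while (1) {
        /* Critical section: claim the next batch of indices. */
        pthread_mutex_lock(&counter_lock);
        int start = counter;
        counter += GRANULARITY;
        pthread_mutex_unlock(&counter_lock);

        if (start >= N) break;
        int end = (start + GRANULARITY < N) ? start + GRANULARITY : N;

        /* The real work happens outside the lock. */
        for (int i = start; i < end; i++)
            is_prime[i] = test_primality(x[i]);
    }
    return NULL;
}
```

With `GRANULARITY` set to 10, the critical section is entered N/10 times instead of N times, while the work per task grows, so the ratio of critical-section time to task time shifts in the right direction.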

mnshaw

By increasing the granularity of each task, the code doesn't waste as much time entering the critical section: it now acquires the lock and updates the counter one-tenth as many times. The time spent in critical sections is overhead that doesn't appear in the serial solution, so we want to minimize it. However, if we increased the granularity too much, the dynamic assignment would lose its point, since the workload could become quite uneven.
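A toy simulation of that trade-off (the costs and the `simulate` function are entirely made up, not from the lecture, and lock overhead is ignored so only load imbalance is modeled): per-element cost varies, and as the batch size grows, dynamic assignment behaves more and more like a static split, so the slowest worker dominates.

```c
#include <stdio.h>

#define N 1000
#define P 4

/* Greedy model of dynamic assignment: the worker that becomes free earliest
 * grabs the next batch. Returns the finish time of the slowest worker. */
static double simulate(int granularity, const double *cost) {
    double worker_time[P] = {0};
    int next = 0;
    while (next < N) {
        int w = 0;
        for (int i = 1; i < P; i++)
            if (worker_time[i] < worker_time[w]) w = i;
        int end = (next + granularity < N) ? next + granularity : N;
        for (int i = next; i < end; i++) worker_time[w] += cost[i];
        next = end;
    }
    double finish = 0;
    for (int i = 0; i < P; i++)
        if (worker_time[i] > finish) finish = worker_time[i];
    return finish;
}

int main(void) {
    double cost[N];
    for (int i = 0; i < N; i++)   /* a burst of expensive elements in the middle */
        cost[i] = (i >= 600 && i < 700) ? 50.0 : 1.0;

    int gs[] = {1, 10, 100, 250};
    for (int k = 0; k < 4; k++)
        printf("granularity %3d -> finish time %.0f\n",
               gs[k], simulate(gs[k], cost));
    return 0;
}
```

With this particular cost pattern, small batches finish near the ideal time, while a batch size of 100 or more puts the whole expensive stretch on a single worker, so in practice there is a sweet spot between lock overhead and load imbalance.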

apadwekar

Visually it is also clear that the parallelizable code (in blue) takes up a larger proportion of the computation time relative to the sequential code (i.e. the critical section, in white) when task granularity is increased.