kayvonf

Question: It is common (but quite imprecise) to use the terms efficiency and performance interchangeably in discussions of parallel programming. Also, we often hear students use the term "speedup" to generally refer to a parallel program "running faster" (It sped up!). These terms are related but actually mean different things. How are the terms performance, efficiency, and speed-up different and related?

rbenua

The concepts of work and span, which should be familiar to you from 150 and 210, are important to keep in mind here. The total "work" done by a program is the total amount of computation done by all processors, whereas the "span" of the program is the length of the longest chain of operations that must be done sequentially.
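
In symbols (this is the standard work-span notation, not something from this slide itself): writing $T_1$ for the work and $T_\infty$ for the span, the running time on $P$ processors satisfies

\[ T_P \;\ge\; \max\!\left(\frac{T_1}{P},\; T_\infty\right) \quad\Longrightarrow\quad \text{speedup} \;=\; \frac{T_1}{T_P} \;\le\; \min\!\left(P,\; \frac{T_1}{T_\infty}\right) \]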

In the more algorithmically-focused 150 and 210, you mostly assumed you had an idealized infinitely-parallel machine and tried to minimize span when possible. We'll talk about some difficulties with this approach in real programs, including some cases where it's beneficial to increase the total work of the program to reduce the need for communication between different processing units.

smklein

In class, we discussed the "2x speedup on a 10 core machine" in the context of "would your boss fire you or give you a raise with this result?"

There were justifications at both ends of the spectrum --

"Fire you" -- If the program has the potential to be more parallel, then 2x on a 10 core machine could be a waste of resources. The speedup may not be worth it, especially if speed is not critical but cores are expensive.

"Give you a raise" -- If cores are cheap and/or the speedup is valuable, the result may well justify the cost. The example given was a 2x speedup in stock trading, where the increase in speed might result in "billions of dollars of profit". That would probably be worth the cost of a 10 core machine, even though it is not executing at full efficiency.

kkz

Here is how I currently view the above terms.

  • Performance: An absolute measure of speed (e.g., execution time per unit of problem size)
  • Efficiency: Speed-up / # of processors
  • Speed-up: As given in class, [Execution time with 1 processor] / [Execution time with P processors]
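
Plugging the "2x speedup on a 10 core machine" example from above into these definitions (my arithmetic, just to illustrate):

\[ S_{10} = \frac{T_1}{T_{10}} = 2, \qquad \text{efficiency} = \frac{S_{10}}{10} = 0.2 = 20\% \]
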
lixf

Question: According to the definition of speed-up, the speed-up is bounded by the # of processors (with equality if we assume perfect distribution of work and no communication overhead). Does this mean that even a really slow algorithm can only get linear speed-up?

arjunh

@lixf I believe that the constraint is tighter than that; while an increase in the number of processors (assuming perfect distribution of work and no communication overhead) will yield speedup, there is a bound given by Amdahl's Law. The law essentially states that the speedup of a program is bounded above by the reciprocal of the fraction of the program that is inherently sequential (i.e., where dependencies in the code prevent parallel execution).

For example, suppose that 25% of our program is inherently sequential, while the remaining 75% is parallelizable. If we have enough processors, we could theoretically perform the parallelizable portion completely in parallel. However, the remaining 25% of the program still has to be done sequentially. This caps the speedup at 4x; no matter how many processors you utilize, your speedup will be no better than 4x.
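
Written out (this is the standard statement of Amdahl's Law, with $s$ the sequential fraction):

\[ S(P) = \frac{1}{\,s + \frac{1-s}{P}\,}, \qquad \lim_{P \to \infty} S(P) = \frac{1}{s} = \frac{1}{0.25} = 4 \]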

aaguilet

Question: I'm still not sure how we are measuring efficiency. Isn't it related to the percentage of the resources used over a specific period of time? Say you are doing video processing and need to deliver 30fps, and you can achieve that either with a slow processor working 100% of the time, or with a faster processor working 20% of the time. Isn't the slow processor more efficient? (The faster processor would use more power, to begin with...)

iamk_d___g

@aaguilet: I think efficiency, just as in physics, is a measure of (amount of work done) / (price paid to do the work), so how you account for the price matters. In your case, if we take the money paid for the processors as the price, and the fast processor is only twice as expensive, the slow processor still has the lower efficiency (the fast one could do other work in the meantime).
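
To put hypothetical numbers on that (the 2x price is assumed, as above, and I'm assuming throughput scales linearly with utilization): the fast processor delivers 30 fps at 20% utilization, so its capacity is about 150 fps. Measuring efficiency as frames per unit cost:

\[ \text{eff}_{\text{slow}} = \frac{30\ \text{fps}}{1\ \text{unit}} = 30, \qquad \text{eff}_{\text{fast}} = \frac{150\ \text{fps}}{2\ \text{units}} = 75 \]

So the fast processor comes out more efficient per dollar, provided the idle 80% of its time can be put to other use.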

srk

The way I like to think about the terms is as follows...

  1. Performance - Performance is how much work was accomplished within a given time and with given resources. For instance, if your boss has to measure your performance at a job, he will look at how much work you have completed within the deadlines (time constraints) and with the resources he has given you.

  2. Efficiency - I think efficiency is a measure of how many resources you need versus the benefit you get from using them. As someone mentioned above, it is the speed-up divided by the number of resources used to complete the job.

  3. Speed-up - As defined in lecture, speed-up does not simply mean that a job runs faster; it compares the execution time with one processor to the execution time with P processors.
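
To tie these together, here is a minimal C++ sketch (my own, not from lecture) that times the same summation with 1 thread and with P threads, then reports speed-up and efficiency as defined above. Note that a big sum like this is memory-bandwidth-bound, so efficiency will typically come out well below 1.0:

  // Compile with something like: g++ -O2 -pthread speedup.cpp
  #include <chrono>
  #include <cstdio>
  #include <functional>
  #include <numeric>
  #include <thread>
  #include <vector>

  // Each thread sums a disjoint chunk of the vector.
  static void partial_sum(const std::vector<double>& v, size_t lo, size_t hi,
                          double* out) {
      *out = std::accumulate(v.begin() + lo, v.begin() + hi, 0.0);
  }

  // Returns the wall-clock time to sum v using nthreads threads.
  static double timed_sum(const std::vector<double>& v, unsigned nthreads) {
      auto start = std::chrono::steady_clock::now();
      std::vector<std::thread> threads;
      std::vector<double> partial(nthreads, 0.0);
      size_t chunk = v.size() / nthreads;
      for (unsigned t = 0; t < nthreads; t++) {
          size_t lo = t * chunk;
          size_t hi = (t == nthreads - 1) ? v.size() : lo + chunk;
          threads.emplace_back(partial_sum, std::cref(v), lo, hi, &partial[t]);
      }
      for (auto& th : threads) th.join();
      double total = std::accumulate(partial.begin(), partial.end(), 0.0);
      std::chrono::duration<double> elapsed =
          std::chrono::steady_clock::now() - start;
      std::printf("threads=%u sum=%.0f time=%.4fs\n", nthreads, total,
                  elapsed.count());
      return elapsed.count();
  }

  int main() {
      std::vector<double> v(1 << 24, 1.0);  // ~128 MB of doubles
      unsigned P = std::thread::hardware_concurrency();
      if (P == 0) P = 4;  // fall back if the runtime can't tell us
      double t1 = timed_sum(v, 1);      // execution time with 1 processor
      double tp = timed_sum(v, P);      // execution time with P processors
      double speedup = t1 / tp;         // speed-up = T_1 / T_P
      double efficiency = speedup / P;  // efficiency = speed-up / P
      std::printf("P=%u speedup=%.2fx efficiency=%.2f\n", P, speedup,
                  efficiency);
      return 0;
  }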

kayvonf

Efficiency can be thought of as work done per unit resource. That resource might be money, transistors (chip area), power expended, etc. Whatever the case, a program is efficient if it is doing more with less.

This definition goes beyond the domain of computing for sure. For example, when you are efficient with your time, it means you got a lot done in a fixed amount of time. Example: "Wow, I had a really efficient day today. I wrote five comments on the 15-418 web site!"

azeyara

An interesting example that optimizes for efficiency in terms of cost, among other resources, is the MapReduce programming model. It depends only on commodity machines and is highly scalable.

analysiser

Question: If performance means the work done in a given amount of time / with given resources, then doesn't performance mean the same thing as efficiency? Or is the only difference that efficiency implies the work done per unit resource?

ron

This ambiguity is why I prefer kkz's definition of performance, which I understand as a measure solely of the time taken for a program to complete on a given workload. We can improve the performance of a program on some workload by giving it more resources, without changing the program itself. We can make a more efficient program by changing the code itself to make better use of resources.