jellybean

In the lecture example, we discussed whether it would be worth using 10 people to increase the speed of a counting algorithm. Depending on the use case, it may or may not be worth the additional resources of 10 people (or 10 processors) to speed something up.

In addition to the trade-offs of adding processors, will we discuss storage efficiency at all in this class? Is there ever a case where a particular algorithm may be efficient in its speed and use of processors, but require increased storage space?

hofstee

@jellybean: Sure, normal sequential sorting algorithms run in O(n log n) average case. If we consider bucket sort, we can (sometimes) improve the bound to O(n + k) average case, but at the cost of drastically increasing the space required from O(1) to O(k), where k is the range of elements in your data set.

If you have a stupidly large amount of memory (not storage, generally), this can actually be significantly faster than a traditional sort.
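A minimal sketch of that trade-off, assuming integer keys in a known range [0, k) (this is counting sort, a close cousin of bucket sort; the function name is just for illustration):

    #include <cstddef>
    #include <vector>

    // Counting sort sketch: O(n + k) time, but O(k) extra space,
    // where k is the number of possible key values.
    std::vector<int> counting_sort(const std::vector<int>& keys, int k) {
        std::vector<std::size_t> count(k, 0);   // O(k) memory: one slot per possible key
        for (int key : keys)
            ++count[key];                       // one O(n) pass to tally the keys
        std::vector<int> sorted;
        sorted.reserve(keys.size());
        for (int v = 0; v < k; ++v)             // O(k) pass to emit keys in sorted order
            sorted.insert(sorted.end(), count[v], v);
        return sorted;
    }

If k is much larger than n (say, sorting a handful of 64-bit values), the O(k) table dwarfs the input and a comparison sort wins; that is exactly the space-for-speed trade-off above.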

grarawr

I think it was a good point that efficient doesn't necessarily mean fast. Coming from an economics background, when we mention efficiency we are generally talking about productivity, whereas in CS we mostly talk about speed and space efficiency. Are there any other factors that are considered efficiency concerns?

hofstee

@grarawr in the ECE dept. here I know there are a few people working on chips powered by radio signals, so they deal with extremely low, and intermittent, power. There the primary concerns for efficiency are fault tolerance and low power consumption.

ote

When trying to find further resources online about efficiency in parallelism, I came across this link which I found very interesting. Both answers brought up some interesting points, but the first answer suggested an idea that I was a little unclear about, namely that "a well-scaling parallel algorithm need not be efficient at all if it is easily outperformed by a good sequential algorithm." I was a little confused by what was meant there, and would really appreciate any clarification or ideas.

grarawr

@ote We use parallel algorithms because of the possibility that they are more efficient than sequential ones. If the sequential algorithm for a problem performs better than the parallel one, then we might as well just use the sequential algorithm. I think most of the time a sequential algorithm is easier to code up, too.
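To make the quoted sentence concrete: speedup is usually measured against the parallel algorithm's own one-processor time, so an algorithm can scale perfectly and still lose to a better sequential algorithm. A toy comparison, assuming a parallel algorithm that does n^2 total work with perfect speedup on p processors versus an n log n sequential one (all numbers are illustrative only):

    #include <cmath>
    #include <cstdio>

    int main() {
        const double p = 64;                         // hypothetical processor count
        const double ns[] = {1e2, 1e3, 1e4};
        for (double n : ns) {
            double parallel_time   = n * n / p;      // n^2 total work, perfect p-fold speedup
            double sequential_time = n * std::log2(n);
            std::printf("n=%.0e  parallel=%.3g  sequential=%.3g\n",
                        n, parallel_time, sequential_time);
        }
        return 0;
    }

The parallel version wins at n = 100 but loses from n = 1000 onward, despite scaling perfectly: good scaling measured against the wrong baseline is not efficiency.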

kayvonf

@grarawr: if you are a hardware architect, you'll be very worried about area efficiency: performance per unit chip area (in other words, performance per transistor). Both hardware architects and software developers are now very concerned about energy efficiency: performance per watt. This is a critical metric (and perhaps now the most critical metric) when building systems for data centers, supercomputers, and mobile devices.

taylor128

What are some common ways for the compiler to discover/exploit parallelism inside a program? From lecture, Prof. Kayvon mentioned that one (current) approach is to develop parallel programming languages to help the compiler with scheduling and granularity control for switching between parallel and sequential algorithms.
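Not a complete answer, but one concrete thing mainstream compilers already do is auto-vectorize (and sometimes auto-parallelize) loops whose iterations they can prove independent. A sketch, with made-up function names:

    #include <cstddef>

    // Iterations are independent (out[i] depends only on in[i]), so an
    // optimizing compiler (e.g., gcc/clang at -O3) can emit SIMD code here.
    // __restrict is a compiler extension promising the arrays don't alias.
    void scale(float* __restrict out, const float* __restrict in,
               float a, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = a * in[i];
    }

    // Loop-carried dependence: each iteration needs the previous acc, so as
    // written the loop is sequential. Compilers can still vectorize it as a
    // reduction, but only when allowed to reassociate floats (e.g., -ffast-math).
    float reduce_sum(const float* in, std::size_t n) {
        float acc = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            acc += in[i];
        return acc;
    }

Proving that iterations are independent is the hard part, which is one reason languages that make the parallelism explicit help the compiler so much.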

teamG

@taylor128, adding on to what he said, one good example of writing code with granularity control in mind is what we did in the pasl lab in 15210. However, I remember that when I worked on the pasl lab, forcing parallelism on a program for certain inputs, especially smaller ones, would actually slow the program down. This supports what we've seen in lecture so far: the overhead of parallelizing certain programs can actually make them slower than running sequentially.
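For anyone who didn't take 15210: a minimal sketch of that granularity cutoff, assuming a fork/join-style sum (GRAIN and the use of std::async are just for illustration, not how pasl actually does it):

    #include <cstddef>
    #include <future>
    #include <numeric>

    // Below GRAIN elements, spawning a task costs more than the work itself,
    // so fall back to a plain sequential loop.
    constexpr std::size_t GRAIN = 1 << 16;

    long long sum(const long long* a, std::size_t n) {
        if (n <= GRAIN)                               // small input: sequential is faster
            return std::accumulate(a, a + n, 0LL);
        std::size_t mid = n / 2;
        auto left = std::async(std::launch::async,    // fork the left half as a task
                               sum, a, mid);
        long long right = sum(a + mid, n - mid);      // this thread does the right half
        return left.get() + right;
    }

Set GRAIN to 1 and you spawn a task per handful of elements, so the spawn overhead swamps the actual work, which is exactly the small-input slowdown described above.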