kayvonf

Question: Here's some fodder for student comments. What did you take away as the biggest point of this lecture?

jmc

"Parallelize it" is not nearly as simple an answer as it sounds; the idea of computing pieces at the same time is the easiest part, and it's the work scheduling, synchronization, communication, and machine-level details that gets tricky.

hpark914

For me, the biggest point seemed to be that since single-core CPU performance stopped improving, it is up to software developers to use parallelism in their code in order to improve speed. Hence, the title of the lecture.

ayy_lmao

Because of various factors in the manufacturing process, speeding up computation is no longer the responsibility of the processors themselves; instead, it falls to programmers to write smart parallel code.

sadkins

Humans are not very efficient at mental addition.

Tiresias

Parallelism's popularity is a result of technical limitations on single-threaded processors. Otherwise, people wouldn't want to try to parallelize, because it's too hard: there are too many considerations.

BestBunny

As single-core performance has progressively reached physical limits, parallelism in programming has become a necessity for improving performance and efficiency. Furthermore, as @jmc points out, coming up with the algorithm for parallelizing is the part that takes the least amount of time/effort. Actually executing that algorithm well (e.g., reducing communication costs) is what makes parallelism difficult to accomplish efficiently.

koala

While we have likely reached physical limits to what we can accomplish with a single CPU/GPU core, we can continue to improve our algorithms by parallelizing. The responsibility has shifted from the hardware designers to the software designers. It's also important for algorithm designers to think about how the hardware is implemented.

albusshin

Parallelism is ubiquitous these days. It's essential that a coder understands, at some level, how to utilize the parallelism of machines to write efficient code. The clock rate of a single core has hit a ceiling, and the easiest way we can improve the performance of our code is to understand how to use parallel processing units accordingly.

mnshaw

The biggest point I took away was the importance of this course now, as we have hit a wall in improving sequential computation simply by adding more transistors. The burden of improving efficiency and performance now falls on parallel software. In learning to write parallel programs, an understanding of the hardware is also important.

nemo

This lecture sets the right motivation for why writing parallel programs is a must-know skill, given that hardware improvements have plateaued. It is also clear that parallelizing code is a non-trivial problem where overheads need to be taken into account (parallelism vs. efficiency).

yes

I realized that there is so much more to parallelization than just running on more than one processor, and that there are many things that need to be taken into account, such as communication overhead.

eourcs

The difficulty in parallelizing programs does not come mainly from how the work is divided, but from how the results of sub-problems are merged to yield the final result. Overhead in communicating results between parallel processes means our speedup curves often deviate heavily from the ideal (this merging step introduces sequential dependencies).

This is in line with our intuition about divide-and-conquer algorithms where often the 'merge' step is by far the most complicated (both in terms of code complexity and asymptotic complexity).
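A minimal sketch of what I mean (my own toy example, not code from the lecture): each thread sums its slice of an array independently, but the partial sums still have to be combined on one thread at the end.

```cpp
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// Toy parallel sum: the per-thread work is embarrassingly parallel,
// but the final merge of partial results is sequential.
double parallel_sum(const std::vector<double>& data, int num_threads) {
    std::vector<double> partial(num_threads, 0.0);
    std::vector<std::thread> workers;
    const size_t chunk = data.size() / num_threads;

    for (int t = 0; t < num_threads; t++) {
        size_t begin = t * chunk;
        size_t end = (t == num_threads - 1) ? data.size() : begin + chunk;
        // Each thread sums its own slice independently (the easy part).
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    // The merge runs on one thread: a sequential dependency that, along
    // with thread startup and communication costs, keeps measured speedup
    // below the ideal factor of num_threads.
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

int main() {
    std::vector<double> v(1 << 20, 1.0);
    std::printf("sum = %f\n", parallel_sum(v, 4));
}
```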

emt

A fast program does not necessarily mean an efficient program. If the program is not distributing work efficiently among threads or isn't accounting for communication overhead or synchronization issues, then it is most likely not optimal. And these days, when single-core processor performance improvements are slowing down, it's important for programmers to be able to optimize parallel code.

paracon

The tradeoff between efficiency and speedup is critical when deciding how to parallelize a program. In some cases efficiency is more important, while in others speedup at any cost is the key.
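One standard way to make that tradeoff concrete (these are the usual textbook definitions, not anything specific to this slide): if $T_1$ is the best sequential running time and $T_P$ is the running time on $P$ processors, then

$$\text{speedup}(P) = \frac{T_1}{T_P}, \qquad \text{efficiency}(P) = \frac{\text{speedup}(P)}{P} = \frac{T_1}{P \cdot T_P}.$$

A program can show good speedup but poor efficiency: a 10x speedup on 100 processors is only 10% efficiency.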

lfragago

Until recently, the approach to improving computing performance was based on increasing the operating frequency of the processor and optimizing its execution of instructions. Due to power limitations, operating frequency can no longer increase substantially, so the approach to increasing computing performance is now to have multiple processors working in parallel.

Two things to notice are that:

  1. Doing parallel computation implies some overhead to synchronize the parts of the work done by the different CPUs.

  2. Care has to be taken in how work is assigned to each CPU (load balancing), since subtle changes in the work assignment can dramatically affect performance (a rough sketch of this is below).

But in general, working in parallel can be much faster than working with a single CPU.
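Here is the rough sketch for point 2 (a made-up example, not something from the lecture): suppose element i costs about i units of work, so later elements are more expensive. Simply changing how indices map to CPUs changes how balanced the load is.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int N = 16, P = 4;                 // 16 work items, 4 CPUs
    std::vector<long> block_cost(P, 0), interleaved_cost(P, 0);

    for (int i = 0; i < N; i++) {
        long cost = i;                       // skewed: later items cost more
        block_cost[i / (N / P)] += cost;     // static blocks: 0-3, 4-7, ...
        interleaved_cost[i % P] += cost;     // interleaved: 0,4,8,12 -> CPU 0
    }

    // With blocks, CPU 3 gets items 12..15 (cost 54) while CPU 0 gets
    // items 0..3 (cost 6); interleaving spreads the expensive items around,
    // so per-CPU costs range from 24 to 36 instead of 6 to 54. Since the
    // slowest CPU determines when the whole job finishes, the second
    // assignment is far better even though the total work is the same.
    for (int p = 0; p < P; p++)
        std::printf("CPU %d: block=%ld interleaved=%ld\n",
                    p, block_cost[p], interleaved_cost[p]);
}
```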

haoala

Being able to write parallel code is a useful and marketable skill.

Penguin

I did not know that much of the speedup we have been getting in recent years was due to the work of engineers and not strictly due to improvements in hardware. As a result, in order to continue seeing speedups, we have to express parallelism in our code ourselves, because the hardware is no longer going to keep speeding up our sequential code at the same rate for free.

butterfly

The biggest point I took away from the lecture is that communication overhead is the dominant factor affecting the performance of parallel systems. Since clock frequency cannot be increased beyond a point, the only way to improve performance now is to add parallelism. Parallel programs should be written efficiently, ensuring good utilization of resources and good work distribution.

POTUS

Parallel computing results in overheads, just like in the demo when the guy ran down the stairs. This could substantially increase the time spent solving a problem.

rohany

There is a big difference between designing parallel algorithms and actually achieving good speedups on hardware when implementing them.

bazinga

Hardware techniques for increasing computation speed no longer suffice. For instance, instruction-level parallelism has inherent limitations due to instruction dependencies. Likewise, improvements in processor clock rate are limited by heat dissipation issues. Hence, writing parallel programs is the only way to improve speed. However, while designing parallel software, it is important to consider the major factors that prevent speedup of parallel programs, which include communication and synchronization overhead and the need for efficient work distribution with good resource utilization.
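A toy illustration of the dependency limit on instruction-level parallelism (my own example, not from the slides): the first chain of adds must execute one step at a time no matter how many ALUs the core has, while the second set of adds is independent and can be issued together by a superscalar core.

```cpp
// Dependent chain: each add needs the result of the previous one,
// so the hardware cannot overlap them.
double chain(double x, double y, double z, double w) {
    double a = x + y;   // step 1
    double b = a + z;   // step 2: waits for a
    double c = b + w;   // step 3: waits for b
    return c;
}

// Independent adds: no dependencies among p, q, r, so a superscalar
// core can execute all three at once; only the final combine is serial.
double independent(double x, double y, double z,
                   double w, double u, double v) {
    double p = x + y;
    double q = z + w;
    double r = u + v;
    return (p + q) + r;
}
```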

aperiwal

In the lecture, we were introduced to the power wall, which is the main reason processors can no longer scale in the traditional way, leading to parallel programming. To exploit parallelism efficiently, however, we need to understand the limiting factors like workload balance, synchronization, and inter-processor communication.

muchanon

What I took away as the biggest point in this lecture is that any additional speedup we get from our programs will not come for free, so parallel programming is becoming an increasingly important skill for anyone who needs their code to run ever faster. The most important part of parallel programming is understanding the pitfalls that prevent us from reaching our desired or expected speedups.

Levy

It's important to understand why we cannot just use a single core with a very high frequency and strong computational capability. This is the reason this whole class exists, the motivation for a brand new subject, and also the interesting cause behind the choice Intel faced, after the success of the Pentium 4, between developing an even higher-frequency Pentium and something like Core.

GWE_100

This lecture really clears up many notions I was confused about: concurrency vs. parallelism, speedup from hardware vs. from code, fast vs. efficient, the rate at which transistor size decreases vs. the rate at which clock rate increases, etc. I also got to understand why it's important to write code that enables parallelism. As the speedup brought by hardware upgrades becomes smaller, and until the day a super-intelligent compiler comes along, we have to learn parallel programming to speed up our programs.

lya

The first lecture introduces the motivation for the course from two perspectives. On one hand, it is necessary to write parallel code to speed up programs, especially in recent years when instruction-level parallelism and other hardware techniques have nearly stopped improving. On the other hand, it is not easy to develop efficient parallel programs; for example, from the demos, we found that the benefit of parallelism (with little design effort) is not as large as we might think. Those two intriguing points might answer the question that Kayvon asked at the beginning of lecture: why are we here to study this course?

jiangyifan2bad

The strongest impression this lecture left on me was that parallelism is the only way (sooner or later) to keep enhancing our computing power. The increase in computing power over recent years is the foundation that many recent breakthroughs in artificial intelligence are built on. Therefore, parallelism has an indispensable role in bringing humanity more excitement and innovation in the future.

sushi

For me, the biggest point of this lecture is the motivation for writing and running parallel programs. The main reason is that, because of the power wall, the processor clock rate has stopped increasing, so programs no longer get faster for free. However, it's not easy to write high-performance parallel programs, because it's critical to maintain good load balance, among other things.