Previous | Next --- Slide 26 of 43

It seems that even today, effectively parallelizing code remains an arduous task that many developers forgo because CPU performance has reached acceptable levels (even without waiting for the next generation chipset). Do you think CPU performance has reached levels where parallelism is irrelevant for the majority of applications?


@pkoenig10 In my opinion, no. For many applications, for example a web application backed by a database, parallelism is still relevant, but oftentimes speedup can be achieved by taking advantage of existing technologies that handle the difficult parts of parallelism. For example, simply starting many processes running the same code and assigning tasks to those processes using a load balancer allows the code to be easily parallelized (if the application code can run independently). Locking might be delegated to a database engine or other locking technologies.


I think the best answer to your question, as with most systems problems, is "it depends on the workload and the use case". For instance, if your system has many users simultaneously accessing certain shared resources, we would almost certainly want to parallelize the way we handle each user's requests. Likewise, almost any data analysis tool deployed on massive data sets would benefit from being parallelized. In these systems, it makes sense to spend time and effort finding ways to parallelize the code. On the other hand, if you are building a two-person chat system, it might not be worthwhile to parallelize the system, because the CPU will be fast enough to make the system "fast" on its own. (I can't think of better examples where spending time parallelizing the code is not justified; perhaps someone can suggest some?)


@pkoenig10 it seems like it is often the case that the importance of parallelism scales with the importance of the problem. At a small scale, parallelism might seem like an arduous task because there are trivial gains in actual run time at a high overhead in development. However, many of the interesting and important problems that we see being hacked away at today deal with massive amounts of data that needs to be processed in a small amount of time (scheduling, search, etc.).

@fleventyfive I think a good day-to-day example of when we don't think about parallelizing our code is quick scripting. At the application level, while many frameworks handle under-the-hood parallelization like @mperron mentioned, I think most small-scale app developers (small websites, etc.) don't have to worry about parallelization themselves because it would not affect performance for the end user.


@rohan it's interesting that you say "small scale app developers (small websites etc)" don't have to worry about parallelization. The thing is, you never know when a small-scale app will explode into a big thing! I think it's always better to build systems so that, even if they are not parallel initially, they can be parallelized without too much effort later on, if need be.


Speaking of concurrency, I think this article clarifies, to some extent, why Moore's Law is not omnipotent: "The performance gains are going to be accomplished in fundamentally different ways for at least the next couple of processor generations. And most current applications will no longer benefit from the free ride without significant redesign."


@pkoenig10 It's hard to say for the "majority", but I think a lot of applications do. There are probably still many programs whose span is similar to their work; at the same time, we still care a lot about single-core performance, which I think is an important part of CPU performance benchmarks.
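The work-span point above can be made concrete: in the work-span model, speedup on any number of processors is bounded by work divided by span, so a program whose span is close to its work gains almost nothing from parallelism (a minimal sketch; the function name is mine):

```python
def max_speedup(work, span):
    # Work-span model: with total work T1 and span (critical path
    # length) T_inf, speedup on any number of processors is at most
    # T1 / T_inf, regardless of how many cores are available.
    return work / span

# A program whose span is close to its work is nearly serial...
print(max_speedup(100, 90))  # ~1.11: parallelism barely helps
# ...while a short critical path leaves room for real speedup.
print(max_speedup(100, 10))  # 10.0
```

For the nearly-serial case, single-core performance is what actually determines runtime, which is why it still matters in benchmarks.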