kayvonf

Question: In 15-213's web proxy assignment you gained experience writing concurrent programs using pthreads. Think about your motivation for programming with threads in that assignment. How was it different from the motivation to create multi-threaded programs in this class?

Hint: What is the difference between concurrent execution and parallel execution?

TomoA

One explanation that I've seen is that concurrency deals with multiple independent agents whose lifetimes may overlap, while parallelism deals with executing instructions simultaneously. If you have two threads running concurrently, then you know they may potentially run at the same time, and your code should account for that possibility, but it is not guaranteed. If they are run in parallel, though, they will in fact run at the same time, or as close to it as the scheduler can manage. Concurrency is focused on managing multiple executions, while parallelism is focused on actually making those executions happen efficiently.

The above is just my interpretation of Professor Bob Harper's two blog posts about this topic. If I am wrong please correct me. The posts can be found here: https://existentialtype.wordpress.com/2011/03/17/parallelism-is-not-concurrency/

https://existentialtype.wordpress.com/2014/04/09/parallelism-and-concurrency-revisited/

misaka-10032

The main motivation for threads in the proxy lab is to prevent a time-consuming or malicious request from monopolizing a limited resource. The motivation for threads in this class is to make full use of the multi-core resources of modern processors and make the program run faster. Thus, even on a single-core machine it's necessary to have a multi-threaded proxy, but in this class it is less meaningful to have more threads than cores, because that would add context-switching overhead with little performance improvement.
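To make the proxy-lab side concrete, here's a rough thread-per-request sketch (the `accept_connection` and `handle_request` functions are hypothetical placeholders, not the actual proxy-lab interface): each request gets its own thread, so one slow or malicious request only blocks its own thread.

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the proxy's networking code. */
int  accept_connection(void);     /* blocks until a client connects       */
void handle_request(int connfd);  /* may block for a long time on I/O     */

static void *worker(void *arg) {
    int connfd = *(int *)arg;
    free(arg);
    handle_request(connfd);       /* a slow request stalls only this thread */
    return NULL;
}

void serve_forever(void) {
    for (;;) {
        int *connfd = malloc(sizeof *connfd);
        if (connfd == NULL)
            continue;
        *connfd = accept_connection();

        pthread_t tid;
        if (pthread_create(&tid, NULL, worker, connfd) == 0)
            pthread_detach(tid);  /* reap finished threads automatically */
        else
            free(connfd);
    }
}
```

Even on a single core this helps responsiveness, which is exactly why its motivation differs from the speedup-driven threading in this class.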

CC

The proxy lab is on the concurrency side, where we are dealing with many tasks at the same time, and we sometimes think of concurrency as a synonym for multi-tasking. In this course we deal with parallelism, where we may be doing many things at the same time for a single task. Parallelism has hardware implications, whereas concurrency makes sense even on a single-core machine. One good webpage distinguishing the two concepts is here.

Going back to the motivations of the proxy lab and this course, the former is more about getting several independent tasks done at the same time, while the latter is about improving the efficiency of programs.

mallocanswer

@TomoA, hi, I couldn't quite understand how two threads running concurrently may also run at the same time. Could you give some examples of when two concurrent threads run at the same time? To me, concurrency means the threads run in an interleaved way. Thanks!

hofstee

@mallocanswer As mentioned in class, Concurrent Execution is more of an abstraction than Parallel Execution. Concurrent Execution can be implemented in several ways, such as interleaving threads or running them in parallel. For example, with the proxy server, if you had a dual-core CPU you could feasibly have two requests handled simultaneously, one on each core of the CPU.

arundhati

@mallocanswer: I think, as the professor mentioned today, the threads we used in the proxy implemented concurrency in a way that the threads may or may not have executed simultaneously, depending, for example, on whether they ran on a single core (interleaving in time) or on a multi-core processor.
Parallelism, on the other hand, implies that the threads will actually execute simultaneously.

totofufu

The purpose of pthreads in proxylab was to make sure we didn't need to wait for earlier requests to finish before our request started. For example, if my friend requests 100 GB of data and I request 10 MB of data after her, then with a single thread I'd have to wait until my friend gets her 100 GB before my request even starts, which would take a really long time. With interleaved concurrency, I'd get my completed request much faster because the proxy would start on her request, complete my request in between, and then return to hers. So interleaved concurrency in proxylab would be useful if, for example, the first request we got for a webpage was to load a huge video: with a single thread we wouldn't be able to load any other part of the webpage until the video finished loading, but with interleaved concurrency we could. (I think the pthreads in proxylab run in parallel when possible and interleave otherwise.)

The reason we want true parallelism in this class is, I believe, because we care about performance. With interleaved concurrency, a speedup is not necessarily guaranteed. For example, if we add up a bunch of numbers and reduce to get a final sum in parallel, we actually get a speedup; but doing that same reduction with one thread vs. with interleaved concurrency wouldn't give us any kind of speedup.
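Here's a rough sketch of that sum-reduction example (a plain pthreads decomposition I made up, not code from the course): each thread sums one chunk of the array, and the main thread combines the partial sums. Any speedup comes from having multiple cores, not from the threads themselves.

```c
#include <pthread.h>
#include <stdio.h>

#define N        (1 << 20)
#define NTHREADS 4

static long data[N];

struct chunk { int lo, hi; long partial; };

static void *sum_chunk(void *arg) {
    struct chunk *c = arg;
    long s = 0;
    for (int i = c->lo; i < c->hi; i++)
        s += data[i];
    c->partial = s;               /* each thread writes only its own slot */
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        data[i] = i;

    pthread_t tid[NTHREADS];
    struct chunk ck[NTHREADS];

    for (int t = 0; t < NTHREADS; t++) {
        ck[t].lo = t * (N / NTHREADS);
        ck[t].hi = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, sum_chunk, &ck[t]);
    }

    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += ck[t].partial;   /* final reduction of the partial sums */
    }
    printf("sum = %ld\n", total);
    return 0;
}
```

On one core the four threads just interleave and this runs no faster than a sequential loop; on four cores the chunks can actually run simultaneously, which is the kind of speedup this class cares about.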

totofufu

Also, I really like this link that describes the difference between parallel, concurrent, and asynchronous programming.

temmie

@totofufu That was a very clear answer, thanks!

So if I understand correctly, the motivation in 213 was simply to reduce idle time during execution of a program that was not necessarily running on multiple cores (like one person preparing the filling of a pie while they wait for the crust to cook), while in 418 our motivation is to reduce total runtime of a program by explicitly dividing the work between cores where appropriate (like assigning two people to work independently on the filling and crust from the start).

vincom2

@TomoA took the harper links, so let me post this one instead: http://composition.al/blog/2014/11/24/yet-another-blog-post-about-how-parallelism-is-not-concurrency/

An example of parallelism where you don't really have to think about concurrency would be fork-join parallelism, I guess?
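A minimal fork-join sketch along those lines (the two task functions are made-up placeholders): the forked work shares no mutable state, so there are no interleavings to reason about, and the only synchronization is the join.

```c
#include <pthread.h>
#include <stdio.h>

/* Hypothetical independent tasks that touch disjoint data. */
static void *task_a(void *arg) { (void)arg; /* ... compute ... */ return NULL; }
static void *task_b(void *arg) { (void)arg; /* ... compute ... */ return NULL; }

int main(void) {
    pthread_t a, b;

    /* fork: no shared mutable state, so no locks are needed */
    pthread_create(&a, NULL, task_a, NULL);
    pthread_create(&b, NULL, task_b, NULL);

    /* join: wait for both tasks before using their results */
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    printf("both tasks done\n");
    return 0;
}
```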

efficiens

I think pthreads were used so that if there are multiple requests, they don't have to sit in a queue waiting for the requests before them to finish (which is concurrency). Parallelism, on the other hand, is much more about efficiency and performance than about handling blocked requests. I think parallelism also guarantees simultaneous execution.

KnightsLanding

Yes, it's different. Multi-threading in the Web Proxy assignment aims to support handling multiple requests at once because each request blocks while being fulfilled, so the threads are scheduled to interleave and are not necessarily running at the same time. In this course, we aim for truly parallel execution of threads.

RX

In building a proxy server, we want multiple threads to handle requests because network I/O will block one thread while other threads can use the CPU cycles and do their jobs. So for a proxy server we are hiding I/O latency by using threads, and each thread is doing a single independent job. In this class, however, we use threads in order to finish ONE huge job faster, by decomposing the one big job into small ones which can run in parallel. The reason we can speed up is not the gap between I/O and CPU; it is that we have more cores and the task is decomposable (parallelizable).

monkeyking

As I learned from CSAPP, a concurrent program is written as multiple concurrent flows while a parallel program is a concurrent program running on multiple processors. Thus, the set of parallel programs is a proper subset of the set of concurrent programs. Can I say that in the question, concurrent execution is more narrow and it refers to interleaved execution while parallel execution refers to simultaneous execution on multiple cores?

althalus

Is it true that with a concurrent program we can only improve efficiency but not performance, while with a parallel program it is possible to increase both efficiency and performance? Actually, I am confused about what the two terms mean, since my understanding is that being efficient means reducing idleness, while performance is the speed of the program.

idawer

The motivation to create threads in proxylab is to treat all HTTP requests as independent tasks. This is the primary reason they should be scheduled by the kernel as separate entities, considering that a few requests will consume more time or sometimes misbehave. In this class, a big task is primarily broken down into multiple parts which are then scheduled as threads so as to exploit parallelism in the form of multi-core execution. In proxylab, the threads presented a non-blocking view of execution where the proxy was not blocked on a single request (a non-blocking proxy yields better throughput than a blocking proxy even on a single-core machine). But a single big task, if broken into multiple parts and executed as threads on a single-core machine, would not have yielded better performance.

eknight7

@monkeyking: I find your question confusing. If you claim that the set of parallel programs is a proper subset of the set of concurrent programs, then how is concurrent execution "more narrow"?

@althalus: As shown in lecture 1, slide 14, efficiency is not the same as performance (or speedup). The definitions of the terms are in the slides, but to answer your question, it may or may not be possible to increase both efficiency and performance; it depends on the hardware and your program's task(s).