hulk

There is an MPI project in 15-640 (Distributed Systems) too. For that project, the Python users had a good time, but the C programmers suffered. I'm guessing this project won't be easy...

jezimmer

So it looks like ISPC, CUDA, and OpenMP are language add-ons to C/C++ (I had asked in class which language the course would be predominantly taught in, and received these as an answer). The only add-on I've used before is Cilk Plus, from 15-210, so it should be interesting to see how they approach the problem of parallelism differently.

kayvonf

@jezimmer: No need to worry about these details now, but while Cilk Plus and OpenMP extend C/C++ with new constructs, ISPC and CUDA should be thought of as separate, unique languages. They interoperate easily with C/C++ and adopt C-like syntax, but they are not extensions of the C language.

For those that want to get picky: The CUDA kernel call interface (the <<< >>> syntax) is a C/C++ extension. However, the CUDA language used to implement device-side kernels is certainly a different language.
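To make that distinction concrete, here's a minimal sketch (the kernel name and launch configuration are just illustrative):

```cuda
// Device-side kernel: written in the CUDA kernel language.
// Note built-ins like blockIdx/threadIdx, which plain C/C++ does not have.
__global__ void scale(float a, float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] = a * x[i];
}

// Host-side code: ordinary C/C++, except for the <<< >>> launch,
// which is the extension mentioned above; nvcc lowers it into
// plain runtime-API calls.
void scale_on_gpu(float a, float* device_x, int n) {
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock>>>(a, device_x, n);
}
```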

arjunh

@hulk But C/C++ code is far more performant than Python! Don't worry, though; the assignment won't be bad, and we'll try to provide some good working examples to help you along. I haven't used Python's MPI API, but I suspect it wouldn't make things much easier than using C/C++.

@jezimmer We'll be studying these languages throughout the semester (ISPC and CUDA will be covered in the next couple of weeks!). We'll also touch on how MPI and OpenMP work in assignment 3 (the short answer is that MPI uses message passing, while OpenMP uses shared memory; we'll talk more about this very shortly too). We'll talk more about Cilk Plus towards the end of the course, with respect to how you'd implement the API (see this for a sneak peek).

Understanding the differences in how these parallel programming paradigms work is one of the cornerstones of the class.
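For a first taste of that difference, here's a tiny sketch of summing an array both ways (function names are just illustrative, and the MPI version assumes MPI_Init/MPI_Finalize happen elsewhere):

```c
#include <mpi.h>

// Shared-memory style (OpenMP): every thread sees the same array;
// the runtime splits the loop iterations and combines the partial sums.
double sum_openmp(const double* x, int n) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < n; i++)
        total += x[i];
    return total;
}

// Message-passing style (MPI): each process owns only its local chunk,
// and partial results are combined by explicit communication.
double sum_mpi(const double* local_x, int local_n) {
    double local = 0.0, total = 0.0;
    for (int i = 0; i < local_n; i++)
        local += local_x[i];
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return total;
}
```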