Slide 35 of 79

As far as I know, CUDA is an example of the shared address space model: threads from different blocks share global memory, and threads within a block additionally share per-block shared memory. I also noticed that CUDA programs are often combined with MPI (the Message Passing Interface), e.g. on multi-GPU clusters, so message passing can be layered on top. MPI is a separate library, though, not part of CUDA itself.
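A minimal sketch of the two sharing levels mentioned above (the kernel and variable names are made up for illustration, and a block size of 256 is assumed):

```cuda
// Global memory: allocated with cudaMalloc on the host, visible to every
// thread in every block of the grid.
// Per-block shared memory: declared with __shared__, one copy per block,
// visible only to the threads of that block.
__global__ void sharingLevels(float *globalData) {
    __shared__ float blockLocal[256];        // shared within this block only

    int tid = threadIdx.x;
    int gid = blockIdx.x * blockDim.x + tid;

    blockLocal[tid] = globalData[gid];       // stage global data into shared memory
    __syncthreads();                         // barrier: all threads of this block

    // ... threads in this block can now read each other's blockLocal entries,
    // but cannot see the blockLocal array of any other block ...
}
```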


An ISPC program instance is analogous to a CUDA thread, and an ISPC task is analogous to a CUDA thread block. All instances from the same task run on the same CPU core, and all threads from the same block run on the same GPU core (SM). To expose enough parallelism to fill the machine, we launch multiple tasks in ISPC and multiple blocks in CUDA.
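A hedged sketch of that analogy (kernel name and sizes are hypothetical): the launch configuration chooses many blocks, just as ISPC's `launch` creates many tasks.

```cuda
// Each CUDA thread plays the role of one ISPC program instance;
// each thread block plays the role of one ISPC task and is scheduled
// onto a single SM, like a task onto a single CPU core.
__global__ void scaleKernel(float *out, int N) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < N)
        out[i] = 2.0f * out[i];
}

// Launch enough blocks to keep all SMs busy, analogous to
// "launch[numTasks] myTask()" creating enough tasks for all CPU cores:
//   scaleKernel<<<(N + 255) / 256, 256>>>(out, N);
```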


From the perspective of thread blocks, we partition the problem across them in a data-parallel fashion, since CUDA assumes thread blocks are independent (analogous to ISPC tasks). The threads within a block, on the other hand, run concurrently and cooperate, in the spirit of SPMD shared address space programming (analogous to an ISPC gang).
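Both levels can be seen in one small sketch (a per-block sum; the kernel name is made up, and a power-of-two block size of 256 is assumed): the blocks are independent data-parallel partitions, while the threads inside each block cooperate through shared memory and barriers.

```cuda
// Blocks partition the input (data-parallel, independent);
// threads within a block cooperate on a tree reduction (SPMD).
__global__ void blockSum(const float *in, float *blockSums) {
    __shared__ float partial[256];
    int tid = threadIdx.x;
    partial[tid] = in[blockIdx.x * blockDim.x + tid];

    // Cooperative reduction: threads of this block combine their values,
    // synchronizing at a barrier before each step.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        __syncthreads();
        if (tid < stride)
            partial[tid] += partial[tid + stride];
    }
    if (tid == 0)
        blockSums[blockIdx.x] = partial[0];  // one result per independent block
}
```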