Question: How would you compare the following programming abstractions:
a CUDA thread
a CUDA thread block
a pthread
ISPC program instance
ISPC task
tianyih
To answer the question in the slide, I would say CUDA is both a data-parallel model and a shared address space model. It is data-parallel because device code executes across many data elements in SPMD style. Threads within a block also share memory: at the abstraction level, the model provides the `__shared__` qualifier to explicitly declare per-block shared data, and at the implementation level, the GTX 480/680 carves shared memory and the L1 data cache out of the same on-chip storage.
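As a concrete sketch of that abstraction (kernel and variable names are mine, not from the slide), a kernel can stage data in `__shared__` memory and cooperate within a block like this:

```cuda
#include <cuda_runtime.h>

#define BLOCK_SIZE 256

// Hypothetical example: each block sums BLOCK_SIZE elements using
// per-block __shared__ storage. SPMD: every thread runs this same code.
__global__ void block_sum(const float* in, float* out, int n) {
    __shared__ float buf[BLOCK_SIZE];          // visible to all threads in this block
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    buf[threadIdx.x] = (i < n) ? in[i] : 0.0f; // stage into shared memory
    __syncthreads();                           // barrier across the block

    // Tree reduction within the block, all through shared memory
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            buf[threadIdx.x] += buf[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = buf[0];              // one partial sum per block
}
```

Note that `__shared__` expresses sharing only within a thread block; threads in different blocks communicate through global memory instead.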
But is CUDA also a message passing model? Since we use `cudaMemcpy` to move data between the host and the device, does that count as message passing?
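The pattern in question looks like this on the host side (a minimal sketch with error handling omitted; variable names are mine). The host and device have separate address spaces, so each copy is an explicit transfer, which is what makes it resemble a send/receive:

```cuda
#include <cuda_runtime.h>
#include <stdlib.h>

void roundtrip(int n) {
    float *h_buf = (float*)malloc(n * sizeof(float));
    float *d_buf;
    cudaMalloc(&d_buf, n * sizeof(float));

    // Explicit copy into the device's address space ("send")
    cudaMemcpy(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);

    // ... launch kernels that operate on d_buf ...

    // Explicit copy back to the host's address space ("receive")
    cudaMemcpy(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d_buf);
    free(h_buf);
}
```

One way to frame it: the copies are message-passing-like in that nothing is shared between the two address spaces, but unlike a true message passing model (e.g. MPI), there is no matching receive on the device side; the host drives both directions of the transfer.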