Some other examples from class of declarative vs. imperative abstractions include OpenMP's parallel-for and ISPC's foreach loop. These are declarative because the programmer is only saying that the loop's iterations should run in parallel (i.e., what needs to be done), not how the iterations should be distributed among threads or processes. If we instead created threads ourselves and assigned iterations to each one, say in an interleaved or blocked fashion, that would be imperative, since the programmer is deciding how the work gets done. Similarly, an ISPC task launch is declarative because the programmer asks for tasks to be created and run concurrently without specifying a particular implementation, whereas a programmer writing SIMD instructions by hand to do the work in parallel is being imperative.
When working with code, the granularity of our understanding of external libraries, and of how their functions are implemented, matters for using them optimally. We normally see only their declarative abstraction and know how to call them; if we don't understand how the work is actually carried out underneath, we might run into problems later on.
I thought that this link was helpful in giving some examples of declarative vs imperative for folks like me who were still a little confused.
Just so that I understand, writing CUDA code (specifically, launching kernels) is imperative?
I think by the definition above, writing CUDA code is imperative.
@efficiens, @monkeykind: Launching kernels (or, more precisely, launching CUDA thread blocks) is certainly declarative. The semantics of a launch are: "launch this many thread blocks, and schedule them onto the GPU's cores however you wish." However, nearly every other aspect of CUDA's abstractions is imperative.
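A minimal CUDA sketch of this split (the kernel name and sizes are my own illustrative choices): the `<<<...>>>` launch is the declarative part, while the index arithmetic inside the kernel is the imperative part, since the programmer is explicitly deciding how threads map to data.

```cuda
// Imperative part: the programmer decides how each thread's id maps
// to an element of the array.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= a;
}

void run(float *d_x, int n) {
    // Declarative part: "launch this many thread blocks"; the GPU's
    // hardware scheduler decides when and on which core each block runs.
    int threadsPerBlock = 64;
    int numBlocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<numBlocks, threadsPerBlock>>>(d_x, 2.0f, n);
}
```

So the launch configuration only states how much work exists, not its schedule, which is why the launch itself counts as declarative even though most of the surrounding CUDA code is imperative.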