OpenMP is a declarative abstraction. The pragma specifies what should be executed in parallel, and the system chooses the implementation (or the user can tune it).
Another example of a declarative abstraction is launching ISPC tasks. The specific mapping of tasks to execution contexts is not specified by the programmer.
CUDA programming in the persistent-threads style is an example of an imperative abstraction!
foreach in ISPC is declarative. It just says that the gang should cooperatively perform the iterations, but it doesn't specify how iterations are assigned to program instances (e.g., a static interleaved assignment).
@nemo Would you care to explain why CUDA is imperative?
@shhhh CUDA is imperative by design. The designers of CUDA wanted to give programmers as much control as possible so they could get large speedups. That means you have to specify everything manually: which values get assigned to which threads, how the computation is carried out, and where atomic operations are needed (e.g., when updating a shared counter). There is no simple "parallelize this loop" command like in OpenMP or ISPC, which was the challenge in homework 2.
So yeah, CUDA is imperative because you have to explicitly write out what you want to do; you don't just get to declare that a loop should run in parallel.
The invariants of cache coherence are declarative; the specifics of the broadcast (snooping) and directory-based schemes we discussed are imperative.