In summary, Liszt is essentially an abstraction in which the programmer specifies the task at hand and the compiler generates an implementation tailored to the available resources. This code generation adds some overhead, so Liszt trades a little performance for the ease of producing well-performing, portable code.
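To make the model concrete, here is a rough pseudocode sketch loosely based on the paper's heat-conduction example (illustrative names, not exact Liszt syntax): the programmer only describes per-element computation over the mesh, and the compiler is free to lower the loop to MPI, pthreads, or CUDA.

```
// Liszt-like pseudocode (names are illustrative, not the real API)
val Temperature = Field[Vertex, Float](0.0f)
val Flux        = Field[Vertex, Float](0.0f)

for (e <- edges(mesh)) {            // an implicitly parallel loop over topology
  val v1 = head(e)
  val v2 = tail(e)
  val dT = Temperature(v1) - Temperature(v2)
  Flux(v1) -= dT                    // reductions the compiler can reason about
  Flux(v2) += dT
}
```

Because every data access goes through topological operators (edges, head, tail) and field lookups, the compiler can see the full stencil of the loop and schedule it however the target platform requires.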
I may be missing something obvious, but how does the Liszt compiler figure out what to compile to? The diagram in the paper shows that it can cross-compile to generate code using MPI, pthreads, or CUDA, but I can't seem to figure out how it chooses.
Does the user specify what to use or does it somehow figure it out itself?
I would guess the compiler just surveys the available resources on the target machine.
By constraining the user to an abstract interface that hides implementation details like data structures and memory accesses, Liszt can analyze the code precisely and produce optimized results. If Liszt were instead a library for a language like C, the user would have too much power for the same analysis to work. For example, they could perform arbitrary extra reads that cause cache conflicts, and Liszt would have no control over (or visibility into) this.
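The contrast can be sketched like this (pseudocode, with illustrative names):

```
// C-style freedom: the read pattern is opaque to the compiler,
// since idx, stride, and offset can be anything at runtime.
t = temperature[ idx[i] * stride + offset ];

// Liszt-style constraint: the only way to reach data is through
// the mesh topology, so the access pattern (the stencil) is
// statically analyzable.
val t = Temperature(head(e))
```

In the first case the compiler must assume the worst about aliasing and locality; in the second, it knows exactly which elements each iteration touches and can partition, color, or ghost the data accordingly.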
I was wondering whether the Liszt compiler incurs a lot of overhead from figuring out the dependencies, then deciding what to compile to and what the most efficient algorithm would be for this specific instance.