Slide 24 of 69

One question I had during lecture was: how does the Liszt compiler generate code that is optimal on different computer architectures? What I mean is, if it generates C++ or some other type of code for multicore computers, how does it know that the code is optimal on any multicore environment? I spent a little time looking at one of their papers but wasn't quite sure I understood what's going on. The answer may be that it's built into the compiler, but I wasn't sure what techniques the compiler uses, if that information is available.


"Optimal" is not the word I'd use since there's no proof of it being the best solution. What Liszt does do however, is use a good implementation for the machine at hand. For example, the implementation of the Liszt compiler will use parallelization technique X when compiling for a cluster of CPUs, and parallelization technique B when compiling for a GPU. This lecture described some of those techniques.

The point is that there's no "magic going on". The implementors of Liszt understood the domain of physical simulation well enough to know what approach is most desirable for different machine targets, and they coded that knowledge into the compiler and runtime system. But there certainly might be even better implementations out there.
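To make the GPU case concrete, here is a small, hypothetical sketch (not Liszt's actual code; all names are invented for illustration) of the graph-coloring idea discussed in lecture. If two mesh edges share a vertex, they would race when updating vertex data in parallel, so the compiler can color the edges such that no two same-colored edges touch a common vertex; each color class then runs as one conflict-free parallel pass.

```python
def color_edges(edges):
    """Greedily assign colors so edges sharing a vertex get different colors."""
    vertex_colors = {}  # vertex -> set of colors already used by incident edges
    coloring = {}
    for edge in edges:
        # Colors forbidden for this edge: any color on an incident edge.
        used = set()
        for v in edge:
            used |= vertex_colors.setdefault(v, set())
        # Pick the smallest color not in use at either endpoint.
        color = 0
        while color in used:
            color += 1
        coloring[edge] = color
        for v in edge:
            vertex_colors[v].add(color)
    return coloring

# Tiny mesh: a square with one diagonal (5 edges, 4 vertices).
mesh_edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
coloring = color_edges(mesh_edges)
# This mesh needs 3 colors; edges within one color class can update
# their endpoint vertices in parallel without synchronization.
```

This is only a sketch of the scheduling strategy, not the compiler internals; the real system bakes this kind of domain knowledge into its code generation for each target.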

P.S. This was a good question. Interrupt me in lecture and ask!