Slide 15 of 58
jmnash

I can understand why it would be really difficult to make a language that is both high-performance and productive. As mentioned on other slides, this is mostly because efficiency means knowing the hardware and writing extremely detailed, complicated code to extract everything out of it. But it's not immediately clear to me why giving up completeness (generality) would fix that problem and make code portable between types of hardware.

One reason I can think of is that the compiler/interpreter will know exactly what the code is doing, up to certain parameters and constants, so it will know the most efficient mappings for those problems. But that doesn't fully explain it to me. Do domain-specific languages have to detect whether the machine has SIMD, how many cores it has, how its caches are organized, etc.? If so, why doesn't that slow them down?
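On the "why doesn't detection slow it down" question: a framework typically probes the hardware once, at load or compile time, and caches the answer, so later calls pay nothing for it. A minimal Python sketch of that idea (the names `NUM_CORES` and `chunk_bounds` are hypothetical, not from any particular DSL):

```python
import os

# Probe the machine ONCE, when the runtime is loaded. Every later call
# reuses this cached value, so hardware detection adds no per-call cost.
NUM_CORES = os.cpu_count() or 1

def chunk_bounds(n, workers=NUM_CORES):
    """Split n items into `workers` contiguous chunks for a parallel map."""
    base, extra = divmod(n, workers)
    bounds, start = [], 0
    for i in range(workers):
        end = start + base + (1 if i < extra else 0)
        bounds.append((start, end))
        start = end
    return bounds
```

For example, `chunk_bounds(10, workers=4)` yields `[(0, 3), (3, 6), (6, 8), (8, 10)]`: the detection cost was paid once at import, and the per-call work is just arithmetic.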

yihuaf

The productivity of domain-specific languages comes from limiting the power of the language. In other words, if there are only a few things you can do with the language, then it is relatively easy to use. In addition, limiting the power of the language allows for more optimizations, and therefore performance gains: with a restricted language, the framework or interpreter implementation can make more assumptions when optimizing the code.
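A toy illustration of that last point: if the only operation the language offers is an elementwise map, the runtime *knows* there are no loop-carried dependencies and can parallelize however it likes, without any dependence analysis. A hypothetical sketch (`par_map` is made up for illustration, not a real DSL API):

```python
from concurrent.futures import ThreadPoolExecutor

def par_map(f, xs, workers=4):
    """The ONLY operation this mini-'DSL' offers: apply f to every element.

    Because the restricted language guarantees f is applied independently
    to each element, the runtime may freely split the work across threads;
    a general-purpose compiler could not assume this about an arbitrary loop.
    """
    chunk = max(1, len(xs) // workers)
    chunks = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda c: [f(x) for x in c], chunks)
    return [y for part in results for y in part]
```

The result equals a plain sequential map; the restriction is what licenses the parallel schedule.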

LilWaynesFather

@jmnash Say you have a domain-specific graphics language. Then the language knows to look for the HW specifics that benefit graphics processing (things like SIMD, on-board GPUs, dedicated texture units, and other graphics-specific ASIC units). In a generic language, there are just too many variables to take into account when optimizing code for the hardware.
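The point above can be sketched as a backend dispatch: because the DSL's domain is narrow, it can enumerate the handful of code paths it understands and pick the best one the machine supports. A hypothetical sketch (the feature names and `pick_backend` are illustrative only):

```python
def pick_backend(hw_features):
    """Choose the fastest code path this (hypothetical) graphics DSL knows.

    `hw_features` is a set of feature strings discovered once at startup,
    e.g. {"gpu", "simd"}. The preference order is fixed because the DSL
    knows exactly which operations it will ever need to run -- something a
    general-purpose compiler cannot assume about arbitrary programs.
    """
    for backend in ("gpu", "simd", "scalar"):  # best first; scalar always works
        if backend == "scalar" or backend in hw_features:
            return backend
```

So on a machine with only SIMD, `pick_backend({"simd"})` falls through the GPU path and selects `"simd"`, while a machine with neither feature still gets the portable `"scalar"` path.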