LeeK

"Successful" languages generally have 2 out of the 3 desirable traits. No programming language has been created that has all 3. DSLs give high performance and productivity but are specific to a certain domain so they lack completeness.

kayvonf

"No programming language has been created that has all 3."

Well, at least that's the claim (and opinion of many). Perhaps someone may disagree.

Xiao

Food for thought: Why is it so hard to achieve all 3? Is it because of our inability to create better programming languages? Or is it because of inherent design constraints created by the instruction-based programming model?
For example, in order to achieve productivity, we want programming languages that closely mimic the computation model we are actually working with, e.g. graphs, lists, higher-order functions. At the machine level, however, everything is translated into an instruction-based model. The machine is, most of the time, unaware of what it is computing: to it, a program is just a bunch of memory loads/stores and arithmetic operations. This semantic gap has to be bridged by the programming language and its compiler. The larger the semantic gap, the better the productivity we can achieve, but the complexity of the compiler increases, and we lose performance in the translation process. On the other hand, if we hold the compiler's complexity constant, it can generate good code, but we lose completeness, and so on. In other words, this triangle is the result of the discrepancy between our computation model and the machine's computation model.
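To make the semantic gap concrete, here is a minimal sketch (my own illustration, not something from the lecture): the same dot product written in the model the programmer thinks in, and then written roughly the way the machine sees it, as indexed loads feeding a multiply-accumulate loop.

```python
# Two views of the same computation: a dot product.

# The programmer's model: declarative, built from higher-order functions.
def dot_high_level(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

# Roughly what the machine sees: an index, explicit loads from memory,
# a multiply, and a running accumulator. No notion of "dot product"
# survives at this level.
def dot_machine_view(xs, ys):
    acc = 0
    i = 0
    while i < len(xs):
        a = xs[i]      # load
        b = ys[i]      # load
        acc += a * b   # multiply-accumulate
        i += 1
    return acc

print(dot_high_level([1, 2, 3], [4, 5, 6]))    # 32
print(dot_machine_view([1, 2, 3], [4, 5, 6]))  # 32
```

Everything that connects the first version to the second is the compiler's job; the wider that gap, the more productive the first version feels, and the more work (and potential performance loss) hides in the translation.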
However, raising the level of abstraction of the machine's computation model has not been very successful. This was the reasoning behind CISC (Complex Instruction Set Computers), yet in recent years CISC architectures have slowly lost ground to ARM, SPARC, and other RISC architectures.
So this leads to the question I asked at the very beginning... Maybe our current computation stack is inherently flawed. Maybe a different computation model and translation model could fulfill all three of these goals. But that is one huge unknown...

acappiello

I think that part of the answer to some of the questions raised above is that the type of parallel hardware we've been discussing is both relatively new and constantly changing.

As far as completeness and productivity go, this was something that could be reasonably well addressed 10 years ago (and even further in the past), when mainstream computing consisted of single-core processors. Complete programming languages have existed "forever," and even languages like Python have been around since 1991 (source: Wikipedia).

On the other hand, there hasn't been the same level of interest in high-performance computing in the parallel world for as long. For example, CUDA has only been around since 2007 (source: Wikipedia). So perhaps, with more time, we will be able to produce a language that masters all three categories. Until we can do that, it looks like one of the three needs to be sacrificed.

In my opinion, the biggest conflict is between productivity and high performance. The less effort the programmer puts into making a program fast, the more of that work the compiler/interpreter has to make up. DSLs are useful because the compiler/interpreter only needs to focus on optimizing a small subset of tasks.
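A small sketch of that last point, using NumPy as a stand-in for a narrow, arrays-only "DSL" embedded in Python (NumPy is my example here, not something this slide discusses):

```python
# The same dot product, written in complete general-purpose Python
# and in a domain-restricted array sublanguage.
import math
import numpy as np

xs = np.arange(100_000, dtype=np.float64)
ys = np.arange(100_000, dtype=np.float64)

# Complete, general-purpose Python: the interpreter treats every
# element as a generic object, leaving little room to optimize.
def dot_pure_python(xs, ys):
    acc = 0.0
    for a, b in zip(xs, ys):
        acc += a * b
    return acc

# Domain-restricted: one whole-array operation that the library can
# dispatch to heavily optimized native code.
def dot_numpy(xs, ys):
    return float(np.dot(xs, ys))

assert math.isclose(dot_pure_python(xs, ys), dot_numpy(xs, ys), rel_tol=1e-9)
```

The productive, fast version is only possible because the domain is so narrow: the library never has to reason about arbitrary Python code, only about dense array operations, which is exactly the "small subset of tasks" trade-off above.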