kayvonf

Question: How might lazy evaluation of an array math library allow the library implementation to perform optimizations such as the one performed manually by the programmer here?

grose

@kayvonf Suppose, for example, you wanted to square each number in the array and then add 2 to it, e.g. array.map(x => x*x).map(x => x+2)

If we evaluate this lazily, then we essentially have a list of all the operations we want to perform, and we can functionally compose the operations before mapping them.
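To make this concrete, here is a minimal sketch in Haskell of how such a library could record the pending element-wise operation and fuse chained maps into a single traversal. The names (LazyArr, fromList, lmap, force) are made up for illustration and do not come from any particular library:

-- A hypothetical "lazy array": the data plus the element-wise operation
-- that has been requested but not yet applied.
data LazyArr a b = LazyArr (a -> b) [a]

fromList :: [a] -> LazyArr a a
fromList = LazyArr id

-- Mapping just composes with the pending function; no data is touched.
lmap :: (b -> c) -> LazyArr a b -> LazyArr a c
lmap f (LazyArr g xs) = LazyArr (f . g) xs

-- Only here is the data traversed, once, with the fused operation.
force :: LazyArr a b -> [b]
force (LazyArr g xs) = map g xs

example :: [Int]
example = force (lmap (+ 2) (lmap (\x -> x * x) (fromList [1, 2, 3])))
-- example == [3, 6, 11]

Because lmap only composes functions, the underlying data is read exactly once inside force, no matter how many operations were chained.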

oulgen

This is already an optimization done in Haskell, since Haskell is inherently lazy. Similar to grose's explanation, if you want to perform multiple operations over the same data, the compiler or the runtime (depending on the operation) can recognize this and compose them together, in order to decrease the number of times the data is read from and written back to memory. This article touches on the issue, but take it with a mountain of salt: http://www.randomhacks.net/2007/02/10/map-fusion-and-haskell-performance/
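As an illustration of how this can be expressed, GHC supports user-visible rewrite rules. The rule below is a simplified sketch of map fusion (GHC's real list fusion is implemented with foldr/build rules rather than this exact rule, but the net effect is the same): two passes over the data collapse into one.

{-# RULES
"map/map fuse" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}

-- Written as two logical passes over the list...
squareThenAdd :: [Double] -> [Double]
squareThenAdd xs = map (+ 2) (map (\x -> x * x) xs)
-- ...which the rule (applied when compiling with -O) rewrites into the
-- single-pass equivalent of: map (\x -> x * x + 2) xs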

rojo

Haskell is a great example, as mentioned by @oulgen. In fact, it takes this to the next level. For example, a command like

take 10 [5,10..]

is used to get the first 10 numbers of the infinite series 5, 10, 15, 20, ...

Since Haskell is lazy, it will not evaluate the infinite series until it sees what the user requires, which in this case is the first 10 numbers in the series. Hence it will evaluate only the first 10 elements and no more. This would be impossible if Haskell did not use lazy evaluation.
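For reference, evaluating the expression in GHCi returns immediately with just the demanded prefix of the infinite list:

ghci> take 10 [5,10..]
[5,10,15,20,25,30,35,40,45,50]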

sanchuah

Spark uses a lazy evaluation strategy, too: transformations on an RDD are only recorded, and nothing is actually computed until an action forces a result.

zhiyuany

There is another disadvantage of the first implementation: it prevents the compiler from performing the loop pipelining optimization (unless the add and mul functions are inlined).