Slide 3 of 58
BryceToTheCore

I think the main idea of this slide is the concept of motivation.

I think that each of these technologies was motivated by a particular form of parallelism.

I will share some of my current views on the motivations for the technologies listed in the upper half of the slide.

SIMD

Motivated by data-parallel computations: those that apply the same operation to many data elements, close together in the program's execution path.
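To make the data-parallel idea concrete, here is a toy model of 4-wide SIMD execution in plain Python. This is illustrative only: real SIMD is a hardware feature (e.g. SSE/AVX instructions, or vectorized array libraries), and the lane width and `simd_add` helper here are my own invention.

```python
# Toy model of 4-wide SIMD: one "instruction" applies the same
# operation to a whole lane group of elements at once.
WIDTH = 4  # hypothetical number of lanes per SIMD instruction

def simd_add(a, b):
    """Add two equal-length sequences, WIDTH elements per step."""
    out = []
    for i in range(0, len(a), WIDTH):
        # A single conceptual instruction covers this lane group.
        out.extend(x + y for x, y in zip(a[i:i + WIDTH], b[i:i + WIDTH]))
    return out

print(simd_add([1, 2, 3, 4, 5, 6, 7, 8],
               [10, 20, 30, 40, 50, 60, 70, 80]))
# -> [11, 22, 33, 44, 55, 66, 77, 88]
```

The point is that the loop body is identical for every element, which is exactly the property SIMD hardware exploits.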

Multi-threading

I think multi-threading was motivated by the desire to hide latency. By keeping multiple execution contexts resident on the same core, the processor can do useful work while waiting for long-latency events such as memory accesses. Warps on a GPU are a good example of this.
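A minimal sketch of latency hiding, using Python threads and `time.sleep` as a stand-in for a memory stall (the 0.1 s delay is a made-up number for illustration):

```python
import threading
import time

def worker(results, key):
    time.sleep(0.1)            # simulated long-latency "memory" stall
    results[key] = key * key   # useful work once the data arrives

results = {}
threads = [threading.Thread(target=worker, args=(results, k))
           for k in range(4)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# The four 0.1 s stalls overlap, so total time is roughly 0.1 s
# rather than the 0.4 s a single sequential context would need.
print(results, round(elapsed, 2))
```

The same principle applies inside a single hardware core: while one context is stalled, another issues instructions.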

Multi-core

Task-parallel execution. In contrast to SIMD, which exploits data-level parallelism within a single instruction stream, multi-core execution was motivated by programs that run multiple independent functions at once. A good example is an interactive visual application with a state-update loop and a drawing loop. While drawing requires the state, it does not need to be synchronized with the state updates; it only wants to display the most recent consistent state available when a frame begins. This is a good example of a workload with independent, dissimilar tasks, which multi-core handles quite well.

Another feature of multi-core parallelism is that it lets tasks run at their own pace: if one task, such as the display loop, is slower than the others, it does not need to become a bottleneck for the faster tasks.
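The update-loop / draw-loop decoupling above can be sketched as two threads sharing a snapshot under a lock. This is a sketch under stated assumptions: the loop counts and sleep interval are arbitrary, and Python threads share one core under the GIL, whereas on a real multi-core system each loop could occupy its own core.

```python
import threading
import time

state_lock = threading.Lock()
state = {"frame": 0}
snapshots = []

def update_loop():
    global state
    for i in range(1, 101):
        with state_lock:
            state = {"frame": i}   # publish a new consistent state

def draw_loop():
    for _ in range(5):             # drawer runs at its own, slower pace
        with state_lock:
            snapshots.append(state["frame"])
        time.sleep(0.01)           # simulated rendering time

u = threading.Thread(target=update_loop)
d = threading.Thread(target=draw_loop)
u.start(); d.start()
u.join(); d.join()
print(snapshots)  # each drawn frame is whatever state was current then
```

Note the drawer never blocks the updater for longer than one snapshot copy; the updater is free to run far ahead.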

Multiple node

This form of parallelism was likely motivated by the desire for true separation of logic mediated by message passing, and by the desire to scale programs across larger pools of resources.
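The message-passing discipline can be sketched with two "nodes" that share nothing and interact only through explicit messages. Here the nodes are threads and the channel is a `queue.Queue`, purely so the example is self-contained; between real nodes the channel would be sockets, RPC, or MPI.

```python
import threading
import queue

inbox = queue.Queue()   # messages from node A to node B
outbox = queue.Queue()  # replies from node B

def node_a():
    for n in [1, 2, 3]:
        inbox.put(n)     # send work as a message
    inbox.put(None)      # sentinel: no more messages

def node_b():
    while True:
        msg = inbox.get()
        if msg is None:
            break
        outbox.put(msg * 10)  # reply with a result message

a = threading.Thread(target=node_a)
b = threading.Thread(target=node_b)
a.start(); b.start()
a.join(); b.join()

results = [outbox.get() for _ in range(3)]
print(results)  # -> [10, 20, 30]
```

Because neither node touches the other's state directly, the same structure scales from threads to processes to machines by swapping the channel implementation.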

Multiple server

I think that multiple-server parallelism has roughly the same motivations as multiple-node, except that the separation of resources is even sharper: working sets that need to share data should probably not be split across different servers.

It is also important to remember that we can often save money and power by designing better algorithms and code rather than naively throwing more computing resources at a problem.