Slide 29 of 81
planteurJMTLG

To answer the question: why couldn't we just use GPUs for data-parallel programs in general, and not only for graphics applications?

Qiaoyu

I think it is because the GPU is not capable of executing arbitrary instructions like a normal CPU; all it can do is, given some geometry, compute each pixel in parallel. So if we want the GPU to compute some non-graphics task for us, we have to deceive it: we treat every element of the array (the data we want to process in parallel) as a pixel of the input. That way, the GPU thinks it is rendering some kind of image, but the image is actually meaningless; afterwards we read each "pixel" back out as the result of our pre-defined function. And now we can use the GPU's pipeline to do something in parallel other than graphics.
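
To make that mapping concrete, here is a rough CPU-side sketch (plain C++, no real OpenGL calls) of the idea: lay the array out as a W x H "texture" and apply the per-element function once per "pixel", which is the role a fragment shader played in the old GPGPU trick. The names (`fake_shader`, `run_as_fake_render`) and sizes are illustrative, not from any real API.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// The per-element computation we actually care about. In the old GPGPU
// trick, this logic would be written as a fragment shader.
float fake_shader(float x) {
    return 2.0f * x + 1.0f;   // any element-wise function
}

// Pretend the input array is a W x H "texture" and run the "shader" once
// per "pixel". On a real GPU all W*H invocations would run in parallel
// inside the graphics pipeline; the loops here only show the mapping
// between array indices and pixel coordinates.
std::vector<float> run_as_fake_render(const std::vector<float>& input,
                                      std::size_t W, std::size_t H) {
    std::vector<float> output(W * H);
    for (std::size_t y = 0; y < H; ++y) {
        for (std::size_t x = 0; x < W; ++x) {
            std::size_t i = y * W + x;          // element i <-> pixel (x, y)
            output[i] = fake_shader(input[i]);
        }
    }
    return output;   // "reading back the framebuffer" gives our results
}

int main() {
    std::vector<float> data(256 * 256, 3.0f);   // 65536 elements as a 256x256 "image"
    std::vector<float> result = run_as_fake_render(data, 256, 256);
    std::printf("result[0] = %f\n", result[0]); // expect 7.0
    return 0;
}
```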

jmc

As @Qiaoyu explained, but at a higher level of abstraction: GPUs were originally designed to perform a particular kind of math on data with a particular structure, namely graphics rendering. Realizing that they provided fast SPMD computation (and kept getting faster as transistor counts grew), programmers doing scientific computation began "hacking" GPUs by mapping their programs onto a structure that could run through the graphics pipeline. In 2004 Ian Buck created a robust abstraction of this hack with the Brook stream programming language, which lets the programmer write in terms of streams while the compiler translates the code into OpenGL commands that run on the GPU. From 2007 on, GPU makers such as NVIDIA began designing their GPUs for this generic, non-graphics use by providing a "compute mode" (i.e. non-graphics mode) interface, of which the CUDA programming model we are learning is an example.
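
For contrast with the graphics hack above, here is a minimal sketch of what that "compute mode" interface looks like in CUDA: the programmer writes the per-element function as a kernel directly, with no pretense of rendering. This is a generic SAXPY-style example, not code from the lecture; the kernel name and sizes are illustrative.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one array element directly -- no textures,
// no framebuffer, no graphics pipeline in sight.
__global__ void scale_and_add(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   // enough blocks to cover n elements
    scale_and_add<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    std::printf("y[0] = %f\n", y[0]);           // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```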

anonymous

In the modern embedded-systems domain there are many other cases like this, where the CPU is used for control while other chips, such as DSPs and FPGAs, handle the computation.

shhhh

Why do we need to fool the GPU into thinking it's computing graphics? As far as I understand, the GPU is designed to do many simple calculations in parallel, right? Why was the dummy rendering necessary?

jmc

@shhhh Because the GPU was originally NOT designed to do generic parallel computation. The hardware itself limited it to a particular pipeline designed for graphics.