CaptainBlueBear

This might be a bit off-topic, but according to Wikipedia, GPUs have a highly parallel structure that lets them work on large blocks of visual data in parallel. With the rise of integrated GPUs (and even before that), are GPUs also used for non-graphics/image computation, or are they too inefficient for that?

jaguar

GPUs are commonly used for tasks where parallelism helps or where the GPU's design is otherwise a better fit for the computation. They were fairly commonly used for Bitcoin mining before specialized mining hardware became widespread (Source: https://en.bitcoin.it/wiki/Why_a_GPU_mines_faster_than_a_CPU). The Folding@Home distributed computing project also has the option of using the user's GPU (Source: https://folding.stanford.edu/home/faq/faq-high-performance/#ntoc6).

blairwaldorf

@CaptainBlueBear: Aside -- there are a lot of other applications for GPUs beyond graphics and images. For example, many machine learning algorithms have been rewritten to take advantage of the parallelism. Many problems in ML involve huge numbers of features that are best described as vectors and matrices, and operations on matrices can be done efficiently in parallel -- matrix factorization, for example. (See: https://people.mpi-inf.mpg.de/~rgemulla/publications/gemulla11dsgd.pdf)
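To make the matrix point concrete, here is a minimal sketch of running the same matrix multiply on the CPU and on the GPU. It assumes PyTorch, which isn't something from this thread (a later comment refers to Torch + cuDNN); it's just one convenient way to dispatch the work to a GPU. The reason this parallelizes so well is that every output element is an independent dot product.

```python
import torch

# Every element of the product is an independent dot product,
# so the work maps naturally onto thousands of GPU cores.
n = 1024
a = torch.randn(n, n)
b = torch.randn(n, n)

c_cpu = a @ b  # runs on the CPU

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    c_gpu = a_gpu @ b_gpu        # same operation, dispatched to the GPU
    torch.cuda.synchronize()     # GPU kernels run asynchronously; wait for them
    # results agree up to floating-point rounding
    print(torch.allclose(c_cpu, c_gpu.cpu(), atol=1e-2))
```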

jrgallag

The concept of using GPUs for applications where CPUs would traditionally be used is called "General-purpose computing on graphics processing units" (GPGPU).

GPUs are extremely parallel (a GTX 980 has 2048 CUDA cores!) and are built for the regular, throughput-oriented workloads common in graphics processing. They are also good at keeping those cores busy with little idle time. A lot of the hardware on a GPU is special-purpose, but there are programmable components.

However, GPU computing has disadvantages (which is why we still have CPUs). GPUs don't handle divergent branching well, since they rely on many threads running the same instructions. They also have very limited means of communication between threads, and don't support interrupts or exceptions at all. This more limited feature set makes it more difficult (or impossible) to write GPU code for many applications.

In short, if you want to run operations involving many independent threads with little branching, use a GPU. If you need communication between threads or heavy branching, stick with the CPU.
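A rough illustration of that last point, assuming PyTorch purely as an illustration (not something from this thread): in the data-parallel style that suits a GPU, a per-element branch is rewritten so that every element runs the same instructions, with both sides computed and a mask selecting the result -- which is roughly what divergent GPU threads end up doing anyway.

```python
import torch

x = torch.randn(1_000_000)

# Branchy, CPU-style code: each element takes its own code path.
def branchy(xs):
    out = torch.empty_like(xs)
    for i in range(xs.numel()):
        if xs[i] > 0:
            out[i] = xs[i].sqrt()
        else:
            out[i] = xs[i] * xs[i]
    return out

# GPU-friendly, data-parallel style: every element executes the same
# instructions; both branches are evaluated and a mask picks the result.
def uniform(xs):
    return torch.where(xs > 0, xs.clamp(min=0).sqrt(), xs * xs)
```

If both sides of the branch are expensive, evaluating them everywhere wastes work -- that wasted work is essentially the divergence penalty described above.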

Sources: http://www.futurechips.org/chip-design-for-all/cpu-vs-gpgpu.html http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-980/specifications

tding

The way I understand it: GPU cores are like many less intelligent workers who can only do addition, while CPU cores are really smart workers who can do multiplication as well. The small number of CPU cores doesn't bother with such easy yet large tasks, so they recruit the many GPU cores to do them instead.

KyloRen

@tding are you basing your comments on the following paper by the professor?

https://graphics.stanford.edu/papers/gpumatrixmult/gpumatrixmult.pdf

Note that it was published in 2004; these two more recent documents paint a different picture of matrix multiplication:

2012: https://www.shodor.org/media/content/petascale/materials/UPModules/matrixMultiplication/moduleDocument.pdf

2013: https://developer.nvidia.com/sites/default/files/akamai/cuda/files/Misc/mygpu.pdf

I think NVIDIA and ATI have both come a long way in making GPUs better at matrix calculations in particular.

Trump-card example: deep learning. All (fine, most) of the computations in a network are handled on the GPU. The Titan X has 12 GB(!) of memory onboard to carry all this computational weight. A basic network gets a speedup of at least 20x going from the CPU to the optimized Torch + cuDNN libraries (see the benchmark sketch below).

The picture isn't entirely rosy, but GPUs are getting bigger and adding cores, and that's a good thing for both programmers and users.
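The 20x figure above is the commenter's own; here is a minimal sketch of how you could measure something like it yourself. It assumes PyTorch and a made-up two-layer network -- the model, sizes, and iteration count are arbitrary illustrations, not from the thread.

```python
import time
import torch
import torch.nn as nn

# A small, made-up MLP -- purely illustrative.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
x = torch.randn(256, 1024)

def bench(m, inp, device, iters=50):
    m, inp = m.to(device), inp.to(device)
    with torch.no_grad():
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            m(inp)
        if device == "cuda":
            torch.cuda.synchronize()  # kernels are asynchronous; wait before timing
    return (time.time() - start) / iters

cpu_time = bench(model, x, "cpu")
if torch.cuda.is_available():
    gpu_time = bench(model, x, "cuda")
    print(f"CPU {cpu_time*1e3:.1f} ms, GPU {gpu_time*1e3:.1f} ms, "
          f"speedup {cpu_time/gpu_time:.1f}x")
```

The measured ratio depends heavily on the model size, batch size, and the particular CPU and GPU, so treat any single number as anecdotal.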

FarmerScrub

One interesting application of GPUs is that they can perform protein folding simulations at very high speeds for a reasonable cost. There is an interesting distributed computing project called Folding@Home that uses the idle processing power of the GPUs in people's personal computers to do research on protein folding. It's supposedly useful for Alzheimer's, Huntington's, and Parkinson's research.

Link