machine6

If a 20-core GPU is much, much better at performing parallel computation than a 72-core 'coprocessor', why are we still using CPUs? Why not completely switch to GPUs as the main processing unit in our computing devices?

BestBunny

@machine6 As pointed out here: http://www.nvidia.com/object/what-is-gpu-computing.html, CPUs are optimized for sequential (serial) processing. So although GPUs can be very beneficial at large scale, as in expensive industrial workloads, the day-to-day tasks a typical user performs are often faster and cheaper when run sequentially rather than in parallel (for more on this, see work efficiency: http://www.cs.cmu.edu/~15210/pasl.html#ch::work-efficiency).
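
To make the overhead point concrete, here is a toy CUDA sketch (my own illustration; the sizes and names are made up, not from the lecture): for a tiny task, the fixed costs of allocating device memory, copying data over PCIe, and launching a kernel are each on the order of microseconds, far more than just doing the work sequentially.

    #include <cstdio>

    // Trivial kernel: doubles each element. The per-element work is tiny,
    // so the GPU's setup overhead dominates at small sizes.
    __global__ void scale(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;
    }

    int main() {
        const int n = 64;  // a "day-to-day"-sized task
        float h[n];
        for (int i = 0; i < n; i++) h[i] = 1.0f;

        // CPU version: one short sequential loop, no setup cost at all.
        for (int i = 0; i < n; i++) h[i] *= 2.0f;

        // GPU version: an allocation, two copies, and a kernel launch,
        // each of which alone costs more than the whole loop above.
        float *d;
        cudaMalloc((void **)&d, n * sizeof(float));
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<1, n>>>(d, n);
        cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d);

        printf("%f\n", h[0]);  // 4.0: each version doubled the data once
        return 0;
    }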

Master

@machine6 I don't have a deep understanding of GPUs, but I searched the Internet and will try to summarize the differences myself here. CPU stands for Central Processing Unit and GPU stands for Graphics Processing Unit. Both have buses, caches, and ALUs. The major difference is that a CPU has a limited number of cores, each with large caches, multiple ALUs, and branch-prediction hardware, whereas a GPU has many more cores, each with much smaller caches and simpler ALUs. From these architectural differences, we can see that the CPU is a latency-oriented design while the GPU is a throughput-oriented design.

Back to your question: the GPU has a SIMD-based architecture, making it especially suitable for compute-intensive tasks such as rendering in graphics. However, many operations are neither compute-intensive nor expressible as SIMD instructions; they involve a lot of branching logic instead. Such tasks can't exploit the GPU's computing capability and may be constrained by its small caches and lack of branch prediction. Therefore, CPUs and GPUs have different roles and suit different situations.
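
A minimal sketch of what "SIMD-friendly" looks like in practice: the standard SAXPY example written in CUDA (my own illustration, not code from the lecture). Every thread executes the same instruction stream on different data, so the GPU's many simple ALUs all stay busy.

    // SAXPY (y = a*x + y): regular, data-parallel work that maps well to SIMD/SIMT.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)
            y[i] = a * x[i] + y[i];                     // same op, different data
    }

    // Hypothetical launch, one thread per element in blocks of 256:
    //   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

By contrast, code where each thread would take a different branch (parsing, pointer chasing, irregular control flow) serializes within a SIMD group and gains little from all those ALUs.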

Of course, GPUs are being utilized in areas other than graphics today, such as deep learning.

Metalbird

@machine6, as the other two posters above have already mentioned, the difference comes down to what your application is. My favorite comparison of the two is to think of the CPU as a high-performance car that can comfortably get from point A to point B very quickly, while the GPU is more like a fleet of pizza delivery scooters. If your application is delivering pizza to a lot of customers, the fleet of scooters will probably be more advantageous, since you can achieve much higher throughput than with a single car. However, if your application is racing, or almost anything else besides delivering pizza, you would much rather have the high-performance car. So again, whether you want a GPU or a CPU is application-specific: graphics, dense linear algebra, and deep learning are three applications where GPUs typically excel, and there are many others.

CSGuy

I just wanted to point out that I think it's interesting, a happy accident really, that innovation in these types of GPUs is fueled by the value of a relatively lay market (PC games) while also being useful for more niche academic uses. A lot of fields aren't as lucky.

elephanatic

@Metalbird I really like that comparison. We saw later this week (lecture 4) that the maximum speedup of a program stays modest even with a large number of processors (and thus does not justify the cost of having so many more) if even 5% of the work is sequential. So, as mentioned above, typical day-to-day small tasks are perhaps better left to reliable CPUs until everyone learns to write good parallel programs.
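
The bound @elephanatic is describing is Amdahl's law. With sequential fraction s, the speedup on p processors is

    S(p) = \frac{1}{s + \frac{1 - s}{p}}, \qquad \lim_{p \to \infty} S(p) = \frac{1}{s}

so with s = 0.05, no number of processors can get you past a 20x speedup.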

hzxa21

I also want to add two points here. First, the higher throughput a GPU provides relative to a CPU also comes from pairing it with memory of much larger bandwidth (10 Gb/s per pin on the GTX 1080's GDDR5X). Second, the tradeoff of this bandwidth-oriented design is that GPUs typically need more powerful fans to cool them down.
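
For concreteness, the per-pin rate converts to aggregate bandwidth through the width of the memory interface (the GTX 1080's GDDR5X bus is 256 bits, i.e. 256 pins, per its published spec):

    256 \,\text{pins} \times 10 \,\text{Gb/s per pin} = 2560 \,\text{Gb/s} = 320 \,\text{GB/s}

which is nearly an order of magnitude more than a typical desktop CPU's dual-channel DDR4 bandwidth of a few tens of GB/s.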