cmusam

GPUs support only a limited set of operations, such as floating-point arithmetic, which they can execute extremely fast. To reiterate: to use the GPU for scientific computation, we must go through this graphics interface.

nmrrs

But isn't that only true in the standard mode? I thought we don't need to use this interface in the compute mode that is now available on most modern GPUs.

ojd

@cmusam This was true in the early days of GPGPU. The reason was that, other than a few special-purpose features, the GPU was almost entirely intended as a Graphics Processing Unit, where the main calculations of interest deal with the reals. Therefore, less chip area was allocated to integer hardware (the calculations of the time likely wouldn't have used it much anyway). It's also worth noting that until around 2008, when GPGPU compute started getting serious attention from NVIDIA/AMD, it was common for GPUs to use different floating-point precisions at different stages of the pipeline, ranging from 8 bits to 24 bits, with the programmer having little to no control over it.

In modern times, GPGPU compute is supported well enough by NVIDIA/AMD/Intel that this kind of sacrifice no longer makes sense. There is one minor hiccup: NVIDIA has found a nice position as the more graphics-oriented option, so their hardware tends to favor floating point. With the rise of interest in GPU machine learning and NVIDIA's efforts to make themselves the dominant player in that space, this will probably hold for a while. Still, forcing data that is naturally an integer into a floating-point representation is silly 99% of the time these days. Numerical computation is almost never going to be your bottleneck anyway.
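
To make the contrast with the old graphics-only interface concrete, here is a minimal sketch of what "compute mode" code looks like in CUDA (the kernel name, sizes, and constants are just illustrative, not from the slide): a kernel launched directly on integer data, with no textures, shaders, or other graphics-pipeline stages involved.

```cuda
// Minimal compute-mode sketch: plain integer arithmetic on the GPU,
// no graphics pipeline required. Names and sizes are illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

// y[i] = a * x[i] + y[i], done directly on ints -- no need to
// disguise the data as floating-point texture values.
__global__ void saxpy_int(int n, int a, const int* x, int* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    const int n = 1 << 20;
    int *x, *y;
    // Unified memory keeps the example short; explicit
    // cudaMalloc/cudaMemcpy would work just as well.
    cudaMallocManaged(&x, n * sizeof(int));
    cudaMallocManaged(&y, n * sizeof(int));
    for (int i = 0; i < n; i++) { x[i] = i; y[i] = 1; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy_int<<<blocks, threads>>>(n, 2, x, y);
    cudaDeviceSynchronize();

    printf("y[123] = %d (expected %d)\n", y[123], 2 * 123 + 1);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same program structure works for floats, doubles, or integers alike, which is exactly why the old trick of smuggling integer data through floating-point texture formats is no longer necessary.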