I don't understand exactly why this method is hacky. Which part in this process is hacky? Could someone please explain? Thanks in advance.

CaptainBlueBear

This is hacky because we're tricking the GPU into doing parallel computations for us by presenting them as part of the graphics pipeline. As we went over in the earlier slides, early GPUs were designed specifically to carry out each stage of the graphics pipeline and weren't suited to other computations. Here, to get the GPU to do computations it wasn't designed for, we trick it by presenting our computation as part of the graphics pipeline (as a shader function) and by presenting our input array as an image.

ok

To add to the explanation: we can use the graphics pipeline to perform parallel non-graphics computations by treating each pixel in the image as an element of an array.

althalus

To review: the shader is the function to be applied to a set of elements, and those elements are mapped to pixels. The GPU can then follow its normal graphics pipeline, which gives us the parallelization.
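The mapping described above can be sketched in plain Python with NumPy standing in for the GPU. This is only a conceptual model, not real shader code: the `shader` function and the 4x4 image size are made-up examples, and NumPy's vectorized call stands in for the GPU running the shader on every pixel in parallel.

```python
import numpy as np

# Hypothetical "shader": a pure function applied independently to each
# pixel. Here it squares the value, standing in for any per-element
# computation we want the GPU to perform.
def shader(pixel):
    return pixel * pixel

# Non-graphics input: a flat array of 16 numbers.
data = np.arange(16, dtype=np.float32)

# The "hack": pack the array into a 4x4 "image" so the graphics
# pipeline sees ordinary pixel data.
image = data.reshape(4, 4)

# On a GPU, the shader would run on every pixel in parallel; the
# vectorized call models that per-pixel independence.
result_image = shader(image)

# Unpack the output "image" back into a flat result array.
result = result_image.reshape(-1)
```

The key property that makes the trick work is that the shader touches each pixel independently, so the pipeline is free to process all pixels at once.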
