Due: Monday September 26, 11:59PM EST
100 points total
Update: We now have an extra computing cluster to use. See this article for details.
Overview
In this assignment you will write a parallel renderer in CUDA that draws colored circles. While this renderer is very simple, parallelizing the renderer will require you to design and implement data structures that can be efficiently constructed and manipulated in parallel. This is a challenging assignment so you are advised to start early. Seriously, you are advised to start early. Good luck!
Environment Setup
This assignment requires an NVIDIA GPU with CUDA compute capability 2.0 or higher. The following lab machines have adequate GPUs:
- ghc28/31/32/34-46.ghc.andrew.cmu.edu contain NVIDIA GTX 1080 GPUs. Table G.1 in the CUDA C Programming Guide is a handy reference for the maximum number of CUDA threads per thread block, size of thread block, shared memory, etc. The GTX 1080 GPUs support CUDA compute capability 6.1.
- You may also use the latedays cluster; see here for details.
Note: the performance capabilities of these GPUs vary widely, meaning that your results will differ across these machines. However, feel free to run your code on any of the specified machines. We have distributed reference-solution executables in the starter code directories to grade your code's performance.
To get started:
On the GHC machines:
- The NVIDIA CUDA C/C++ compiler (NVCC) is located at `/usr/local/depot/cuda-7.5/bin/`, which you need to add to your `PATH`. To do so, add `setenv PATH /usr/local/depot/cuda-7.5/bin:${PATH}` to your `~/.cshrc` file if using csh, or `export PATH=/usr/local/depot/cuda-7.5/bin:${PATH}` to your `~/.bashrc` file if using bash.
- The CUDA shared library will be loaded at runtime. It is located at `/usr/local/depot/cuda-7.5/lib64/`, which you need to add to your `LD_LIBRARY_PATH`. Add `setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/usr/local/depot/cuda-7.5/lib64/` to your `~/.cshrc` file if using csh, or `export LD_LIBRARY_PATH=/usr/local/depot/cuda-7.5/lib64/:${LD_LIBRARY_PATH}` to your `~/.bashrc` file if using bash.
On the latedays cluster:
- The NVIDIA CUDA C/C++ compiler (NVCC) is located at `/usr/local/cuda-7.5/bin/`, which you need to add to your `PATH`. To do so, add `setenv PATH /usr/local/cuda-7.5/bin:${PATH}` to your `~/.cshrc` file if using csh, or `export PATH=/usr/local/cuda-7.5/bin:${PATH}` to your `~/.bashrc` file if using bash.
- The CUDA shared library will be loaded at runtime. It is located at `/usr/local/cuda-7.5/lib64/`, which you need to add to your `LD_LIBRARY_PATH`. Add `setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/usr/local/cuda-7.5/lib64/` to your `~/.cshrc` file if using csh, or `export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64/:${LD_LIBRARY_PATH}` to your `~/.bashrc` file if using bash.

Make sure you source (or reload) your RC file. If using bash, run `source ~/.bashrc`. If using csh, run `source ~/.cshrc`.

- Similar to Assignment 1, download the Assignment 2 starter code from the course GitHub using:

`git clone https://github.com/cmu15418/assignment2`
For any C++ questions (such as what the `virtual` keyword means), the C++ Super-FAQ is a great resource that explains things in a way that's detailed yet easy to understand (unlike a lot of C++ resources), and was co-written by Bjarne Stroustrup, the creator of C++!
The CUDA C programmer's guide is an excellent reference for learning how to program in CUDA.
You can also find a large number of examples in the CUDA SDK at `/usr/local/depot/cuda-7.5/samples`. In addition, there is a wealth of CUDA tutorials and SDK examples on the web (just Google!) and on the NVIDIA developer site. In particular, you may enjoy the free Udacity course Introduction to Parallel Programming in CUDA.
Part 1: CUDA Warm-Up 1: SAXPY (5 pts)
To gain a bit of practice writing CUDA programs, your warm-up task is to re-implement the SAXPY function from Assignment 1 in CUDA. Starter code for this part of the assignment is located in the `/saxpy` directory of the assignment tarball.
Please finish off the implementation of SAXPY in the function `saxpyCuda` in `saxpy.cu`. You will need to allocate device global memory arrays and copy the contents of the host input arrays `X`, `Y`, and `result` into CUDA device memory prior to performing the computation. After the CUDA computation is complete, the result must be copied back into host memory. Please see the definition of the `cudaMemcpy` function in Section 3.2.2 of the Programmer's Guide.
As part of your implementation, add timers around the CUDA kernel invocation in `saxpyCuda`. Your additional timing measurement should not include the time to transfer data to and from device memory (just the time to execute the computation). Note that a CUDA kernel's execution on the GPU is asynchronous with the main application thread running on the CPU. Therefore, you will want to place a call to `cudaThreadSynchronize` following the kernel call to wait for completion of all CUDA work. This call to `cudaThreadSynchronize` should return before you stop the timer.
Question 1. What performance do you observe compared to the sequential CPU-based implementation of SAXPY (recall program 5 from Assignment 1)? Compare and explain the difference between the results provided by two sets of timers (the timer you added and the timer that was already in the provided starter code). Are the bandwidth values observed roughly consistent with the reported bandwidths available to the different components of the machine? (Hint: You should use the web to track down the memory bandwidth of an NVIDIA GTX 480 GPU, and the maximum transfer speed of the computer's PCIe-x16 bus. It's PCIe 2.0.)
Part 2: CUDA Warm-Up 2: Parallel Prefix-Sum (10 pts)
Now that you're familiar with the basic structure and layout of CUDA programs, as a second exercise you are asked to come up with a parallel implementation of the function `find_repeats` which, given a list of integers `A`, returns a list of all indices `i` for which `A[i] == A[i+1]`.

For example, given the array `{1,2,2,1,1,1,3,5,3,3}`, your program should output the array `{1,3,4,8}`.
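While developing, it helps to have a trivial sequential version to compare against. A minimal C sketch (the function name `find_repeats_seq` is ours, not part of the starter code):

```c
#include <assert.h>

/* Sequential reference: writes each index i with A[i] == A[i+1] into out,
 * and returns how many such indices there are. out needs room for n-1 ints. */
static int find_repeats_seq(const int *A, int n, int *out) {
    int count = 0;
    for (int i = 0; i + 1 < n; i++)
        if (A[i] == A[i + 1])
            out[count++] = i;
    return count;
}
```

On the example above, `find_repeats_seq` returns 4 and fills `out` with `{1, 3, 4, 8}`.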
Exclusive Prefix Sum
We want you to implement `find_repeats` by first implementing a parallel exclusive prefix-sum operation (which you may remember as `scan` from 15-210).

Exclusive prefix sum takes an array `A` and produces a new array `output` that has, at each index `i`, the sum of all elements up to but not including `A[i]`. For example, given the array `A={1,4,6,8,2}`, the output of exclusive prefix sum is `output={0,1,5,11,19}`.
A recursive implementation should be familiar to you from 15-210. As a review (or for those who did not take 15-210), the following code is a C implementation of a work-efficient, parallel implementation of scan. In addition, details on prefix-sum (and its more general relative, `scan`) can be found in Chapter 6 of the 15-210 lecture notes. Note: Some of you may wish to skip the following recursive implementation and jump to the iterative version below.
```c
void exclusive_scan_recursive(int* start, int* end, int* output, int* scratch) {
    int N = end - start;

    if (N == 0) return;
    else if (N == 1) {
        output[0] = 0;
        return;
    }

    // sum pairs in parallel.
    for (int i = 0; i < N/2; i++)
        output[i] = start[2*i] + start[2*i+1];

    // prefix sum on the compacted array.
    exclusive_scan_recursive(output, output + N/2, scratch, scratch + (N/2));

    // finally, update the odd values in parallel.
    for (int i = 0; i < N; i++) {
        output[i] = scratch[i/2];
        if (i % 2)
            output[i] += start[i-1];
    }
}
```
While the above code expresses our intent well, it is not particularly amenable to a CUDA implementation. We can instead express the algorithm in an iterative manner. The following "C-like" code is an iterative version of scan. We use `parallel_for` to indicate potentially parallel loops. You might also want to take a look at Kayvon's notes on exclusive scan.
```c
void exclusive_scan_iterative(int* start, int* end, int* output) {
    int N = end - start;
    memmove(output, start, N*sizeof(int));

    // upsweep phase.
    for (int twod = 1; twod < N; twod *= 2) {
        int twod1 = twod*2;
        parallel_for (int i = 0; i < N; i += twod1) {
            output[i+twod1-1] += output[i+twod-1];
        }
    }

    output[N-1] = 0;

    // downsweep phase.
    for (int twod = N/2; twod >= 1; twod /= 2) {
        int twod1 = twod*2;
        parallel_for (int i = 0; i < N; i += twod1) {
            int t = output[i+twod-1];
            output[i+twod-1] = output[i+twod1-1];
            output[i+twod1-1] += t;  // change twod1 to twod to reverse prefix sum.
        }
    }
}
```
You are welcome to use this general algorithm to implement a version of parallel prefix sum in CUDA.
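If you want to sanity-check the upsweep/downsweep logic before writing any device code, the `parallel_for` loops can simply be run sequentially. The following plain-C version (our own sketch, assuming like the pseudocode that N is a power of two) is directly runnable:

```c
#include <assert.h>
#include <string.h>

/* Work-efficient exclusive scan; N = end - start must be a power of two. */
static void exclusive_scan_seq(int *start, int *end, int *output) {
    int N = (int)(end - start);
    memmove(output, start, N * sizeof(int));

    /* upsweep: build partial sums in place. */
    for (int twod = 1; twod < N; twod *= 2) {
        int twod1 = twod * 2;
        for (int i = 0; i < N; i += twod1)
            output[i + twod1 - 1] += output[i + twod - 1];
    }

    output[N - 1] = 0;

    /* downsweep: push the partial sums back down the implicit tree. */
    for (int twod = N / 2; twod >= 1; twod /= 2) {
        int twod1 = twod * 2;
        for (int i = 0; i < N; i += twod1) {
            int t = output[i + twod - 1];
            output[i + twod - 1] = output[i + twod1 - 1];
            output[i + twod1 - 1] += t;
        }
    }
}
```

For `{1,4,6,8}` this produces `{0,1,5,11}`, matching the definition of exclusive scan above.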
You must implement the `exclusive_scan` function in `scan/scan.cu`. Your implementation will consist of both host and device code, and will require multiple kernel launches.
Note: The reference scan implementation above assumes that the input array's length (`N`) is a power of 2. In the `cudaScan` function, we solve this problem by rounding the input array length up to the next power of 2 when allocating the corresponding buffers on the GPU. However, we only copy `N` elements back from the GPU buffer to the CPU buffer. This fact should simplify your CUDA implementation.
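The rounding itself is a standard bit trick. A sketch of such a helper (the actual function used by `cudaScan` in the starter code may differ):

```c
#include <assert.h>

/* Round n up to the next power of two (returns n unchanged if it already
 * is one). Assumes n >= 1 and n fits in 32 bits. */
static unsigned int next_pow2(unsigned int n) {
    n--;
    n |= n >> 1;   /* smear the highest set bit downward... */
    n |= n >> 2;
    n |= n >> 4;
    n |= n >> 8;
    n |= n >> 16;  /* ...until all lower bits are set... */
    return n + 1;  /* ...then adding 1 carries into the next power of two. */
}
```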
Implementing "Find Repeats" Using Prefix Sum
Once you have written `exclusive_scan`, you should implement the function `find_repeats` in `scan/scan.cu`. This will involve writing more device code, in addition to one or more calls to `exclusive_scan`. Your code should write the list of repeated indices into the provided output pointer (in device memory), and then return the size of the output list.
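One common way to structure this (sketched here sequentially in C; on the GPU each loop would become a kernel launch, with the scan in the middle) is to flag repeat positions, exclusive-scan the flags to assign each repeat an output slot, and then scatter. The helper name `find_repeats_scan` is ours, not the starter code's:

```c
#include <assert.h>

/* find_repeats via flags + exclusive scan + scatter.
 * flags and scanned are scratch arrays of length n; returns the output count. */
static int find_repeats_scan(const int *A, int n,
                             int *flags, int *scanned, int *out) {
    /* Step 1: flag each position i where A[i] == A[i+1]. */
    for (int i = 0; i < n; i++)
        flags[i] = (i + 1 < n && A[i] == A[i + 1]) ? 1 : 0;

    /* Step 2: exclusive scan of the flags gives each repeat its output index. */
    int sum = 0;
    for (int i = 0; i < n; i++) { scanned[i] = sum; sum += flags[i]; }

    /* Step 3: scatter flagged indices into their slots. */
    for (int i = 0; i < n; i++)
        if (flags[i]) out[scanned[i]] = i;

    return sum;  /* total number of repeats */
}
```

Each step is trivially parallel except the scan in the middle, which is exactly what your `exclusive_scan` provides.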
When using your `exclusive_scan` implementation, remember that the contents of the `start` array are copied over to the `output` array. Also, the arrays passed to `exclusive_scan` are assumed to be in device memory.
Grading: We will test your code for correctness and performance on random input arrays.
For reference, a scan score table is provided below, showing the performance of a simple CUDA implementation on a Gates machine with a GTX 480. To check the correctness and performance score of your `scan` and `find_repeats` implementations, run `make check_scan` and `make check_find_repeats` respectively. Doing so will produce a reference table like the one below; your score is based solely on the performance of your code. In order to get full credit, your code must perform within 20% of the provided reference solution.
```
-------------------------
Scan Score Table:
-------------------------
-------------------------------------------------------------------------
| Element Count   | Fast Time       | Your Time       | Score           |
-------------------------------------------------------------------------
| 10000           | 0.387           | 0.010 (F)       | 0               |
| 100000          | 0.770           | 0.070 (F)       | 0               |
| 1000000         | 1.771           | 0.167 (F)       | 0               |
| 2000000         | 2.799           | 0.150 (F)       | 0               |
-------------------------------------------------------------------------
|                                   | Total score:    | 0/5             |
-------------------------------------------------------------------------
```
Test Harness: By default, the test harness runs on a pseudo-randomly generated array that is the same every time the program is run, in order to aid in debugging. You can pass the argument `-i random` to run on a random array; we will do this when grading. We encourage you to come up with alternate inputs to your program to help you evaluate it. You can also use the `-n <size>` option to change the length of the input array.
The argument `--thrust` will use the Thrust library's implementation of exclusive scan. Up to two points of extra credit for anyone who can create an implementation that is competitive with Thrust.
Part 3: A Simple Circle Renderer (85 pts)
Now for the real show!
The directory `/render` of the assignment starter code contains an implementation of a renderer that draws colored circles. Build the code, and run the renderer with the command line `./render rgb`. You will see an image of three circles appear on screen ('q' closes the window). Now run the renderer with the command line `./render snow`. You should see an animation of falling snow.
The assignment starter code contains two versions of the renderer: a sequential, single-threaded C++ reference implementation in `refRenderer.cpp`, and an incorrect parallel CUDA implementation in `cudaRenderer.cu`.
Renderer Overview
We encourage you to familiarize yourself with the structure of the renderer codebase by inspecting the reference implementation in `refRenderer.cpp`. The method `setup` is called prior to rendering the first frame. In your CUDA-accelerated renderer, this method will likely contain all your renderer initialization code (allocating buffers, etc.). `render` is called each frame and is responsible for drawing all circles into the output image. The other main function of the renderer, `advanceAnimation`, is also invoked once per frame; it updates circle positions and velocities. You will not need to modify `advanceAnimation` in this assignment.
The renderer accepts an array of circles (3D position, velocity, radius, color) as input. The basic sequential algorithm for rendering each frame is:
```
Clear image
for each circle
    update position and velocity
for each circle
    compute screen bounding box
    for all pixels in bounding box
        compute pixel center point
        if center point is within the circle
            compute color of circle at point
            blend contribution of circle into image for this pixel
```
Figure 2 illustrates the basic algorithm for computing circle-pixel coverage using point-in-circle tests. Notice that a circle contributes color to an output pixel only if the pixel's center lies within the circle.
An important detail of the renderer is that it renders semi-transparent circles. Therefore, the color of any one pixel is not the color of a single circle, but the result of blending the contributions of all the semi-transparent circles overlapping the pixel (note the "blend contribution" part of the pseudocode above). The renderer represents the color of a circle via a 4-tuple of red (R), green (G), blue (B), and opacity (alpha) values (RGBA). Alpha = 1 corresponds to a fully opaque circle. Alpha = 0 corresponds to a fully transparent circle. To draw a semi-transparent circle with color `(C_r, C_g, C_b, C_alpha)` on top of a pixel with color `(P_r, P_g, P_b)`, the renderer uses the following math:
```
result_r = C_alpha * C_r + (1.0 - C_alpha) * P_r
result_g = C_alpha * C_g + (1.0 - C_alpha) * P_g
result_b = C_alpha * C_b + (1.0 - C_alpha) * P_b
```
Notice that composition is not commutative (object X over Y does not look the same as object Y over X), so it's important that the renderer draw circles in the order they are provided by the application. (You can assume the application provides the circles in depth order.) For example, consider the two images below. In the image on the left, the circles are drawn in the correct order. In the image on the right, the circles are drawn out of order.
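The non-commutativity is easy to check numerically. A small C sketch of the blend equations above (the function name `blend` is ours):

```c
#include <assert.h>

/* Blend circle color (cr, cg, cb) with opacity ca over pixel p = {r, g, b}. */
static void blend(float cr, float cg, float cb, float ca, float p[3]) {
    p[0] = ca * cr + (1.0f - ca) * p[0];
    p[1] = ca * cg + (1.0f - ca) * p[1];
    p[2] = ca * cb + (1.0f - ca) * p[2];
}
```

Blending a half-transparent red circle and then a half-transparent blue circle over a white pixel gives (0.5, 0.25, 0.75); reversing the order gives (0.75, 0.25, 0.5), a visibly different color.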
CUDA Renderer
After familiarizing yourself with the circle rendering algorithm as implemented in the reference code, study the CUDA implementation of the renderer provided in `cudaRenderer.cu`. You can run the CUDA implementation of the renderer using the `--renderer cuda` program option.
The provided CUDA implementation parallelizes computation across all input circles, assigning one circle to each CUDA thread. While this CUDA implementation is a complete implementation of the mathematics of a circle renderer, it contains several major errors that you will fix in this assignment. Specifically, the current implementation does not ensure that image updates are atomic operations, and it does not preserve the required order of image updates (the ordering requirement is described below).
Renderer Requirements
Your parallel CUDA renderer implementation must maintain two invariants that are preserved trivially in the sequential implementation.
- Atomicity: All image update operations must be atomic. The critical region includes reading the four 32-bit floating-point values (the pixel's rgba color), blending the contribution of the current circle with the current image value, and then writing the pixel's color back to memory.
- Order: Your renderer must perform updates to an image pixel in circle input order. That is, if circle 1 and circle 2 both contribute to pixel P, any image updates to P due to circle 1 must be applied to the image before updates to P due to circle 2. As discussed above, preserving the ordering requirement allows for correct rendering of transparent circles. (It has a number of other benefits for graphics systems. If curious, talk to Kayvon.) A key observation is that the definition of order only specifies the order of updates to the same pixel. Thus, as shown in Figure 4 below, there are no ordering requirements between circles that do not contribute to the same pixel. These circles can be processed independently.
The provided CUDA implementation satisfies neither of these requirements. You can see the result of failing to respect order and atomicity by running the CUDA renderer on the rgb and circles scenes: horizontal streaks appear in the resulting images, as shown in Figure 5 below, and the streaks change from frame to frame.
What You Need To Do
Your job is to write the fastest, correct CUDA renderer implementation you can. You may take any approach you see fit, but your renderer must adhere to the atomicity and order requirements specified above. A solution that does not meet both requirements will receive no more than 10 points on Part 3 of the assignment. (We have already given you such a solution!)
A good place to start is to read through `cudaRenderer.cu` and convince yourself that it does not meet the correctness requirement. To see the effect of violating the two requirements above, compile the program with `make`, then run `./render -r cuda rgb`, which should display the three-circles image. Compare this image with the one generated by the sequential code by running `./render rgb`.
The following are some of the options to `./render`:
```
-b  --bench START:END    Benchmark mode, do not create display. Time frames from START to END
-c  --check              Runs sequential and CUDA versions and checks correctness of CUDA code
-f  --file FILENAME      Dump frames in benchmark mode (FILENAME_xxxx.ppm)
-r  --renderer WHICH     Select renderer: WHICH=ref or cuda
-s  --size INT           Make rendered image INT x INT pixels
-?  --help               Prints information about switches mentioned here
```
Checker code: To check the correctness of the program, `render` has a convenient `--check` option. This option runs the sequential version of the reference CPU renderer along with your CUDA renderer and then compares the resulting images to ensure correctness. The time taken by your CUDA renderer implementation is also printed.
We provide a total of five circle datasets on which you will be graded. However, in order to receive full credit, your code must pass all of our test benches. To check the correctness and performance score of your code, run `make check` in the `/render` directory. If you run it on the starter code, the program will print a table like the following, along with the results of our entire test set:
```
------------
Score table:
------------
-------------------------------------------------------------------------
| Scene Name      | Fast Time (Tf)  | Your Time (T)   | Score           |
-------------------------------------------------------------------------
| rgb             | 0.2615          | 137.9401 (F)    | 0               |
| rand10k         | 10.1625         | 107.9893 (F)    | 0               |
| rand100k        | 101.1337        | 1084.3314 (F)   | 0               |
| pattern         | 0.9361          | 4.2878 (F)      | 0               |
| snowsingle      | 59.4531         | 19.1290 (F)     | 0               |
-------------------------------------------------------------------------
|                                   | Total score:    | 0/65            |
-------------------------------------------------------------------------
```
Note: on some runs, you may receive credit for some of these scenes, since the provided renderer's runtime is non-deterministic. This doesn't change the fact that the current CUDA renderer is incorrect.
"Fast time" is the performance of a good solution on your current machine (as measured by the provided `render_ref` executable). "Your time" is the performance of your current CUDA renderer solution. Your grade will depend on the performance of your implementation compared to these reference implementations (see the Grading Guidelines).
Along with your code, we would like you to hand in a clear, high-level description of how your implementation works as well as a brief description of how you arrived at this solution. Specifically address approaches you tried along the way, and how you went about determining how to optimize your code (For example, what measurements did you perform to guide your optimization efforts?).
Aspects of your work that you should mention in the write-up include:
- Include both partners' names and Andrew IDs at the top of your write-up.
- Replicate the score table generated for your solution and specify which machine you ran your code on.
- Describe how you decomposed the problem and how you assigned work to CUDA thread blocks and threads (and maybe even warps).
- Describe where synchronization occurs in your solution.
- What, if any, steps did you take to reduce communication requirements (e.g., synchronization or main memory bandwidth requirements)?
- Briefly describe how you arrived at your final solution. What other approaches did you try along the way. What was wrong with them?
Grading Guidelines
- The write-up for the assignment is worth 20 points.
Your implementation is worth 65 points, divided into 13 points per scene as follows:
- 2 correctness points per scene.
- 11 performance points per scene (only obtainable if the solution is correct). Performance is graded against the time of a provided benchmark reference renderer, Tref:
  - No performance points will be given for solutions whose time T is more than 10 times Tref.
  - Full performance points will be given for solutions within 20% of the optimized solution (T < 1.20 * Tref).
  - For values in between (1.20 * Tref <= T < 10 * Tref), your performance score on a scale of 1 to 11 is calculated as 11 * Tref / T.
- Up to five points of extra credit (instructor discretion) for solutions that achieve significantly greater performance than required. Your write-up must explain your approach thoroughly.
- Up to five points extra credit (instructor discretion) for a high-quality parallel CPU-only renderer implementation that achieves good utilization of all cores and SIMD vector units of the cores. Feel free to use any tools at your disposal (e.g., SIMD intrinsics, ISPC, pthreads). To receive credit you should analyze the performance of your GPU and CPU-based solutions and discuss the reasons for differences in implementation choices made.
Assignment Tips and Hints
Below are a set of tips and hints compiled from previous years. Note that there are various ways to implement your renderer and not all hints may apply to your approach.
- To facilitate remote development and benchmarking, we have created a `--benchmark` option for the render program. This mode does not open a display, and instead runs the renderer for the specified number of frames.
- When in benchmark mode, the `--file` option sets the base file name for PPM images created at each frame. Created files are `FILENAME_xxxx.ppm`. No PPM files are created if the `--file` option is not used.
- There are two potential axes of parallelism in this assignment. One axis is parallelism across pixels; another is parallelism across circles (provided the ordering requirement is respected for overlapping circles).
- The prefix-sum operation provided in `exclusiveScan.cu_inl` may be valuable to you on this assignment (not all solutions choose to use it). See the simple description of a prefix-sum here. We have provided an implementation of an exclusive prefix-sum on a power-of-two-sized array in shared memory. The provided code does not work on non-power-of-two inputs, and IT ALSO REQUIRES THAT THE NUMBER OF THREADS IN THE THREAD BLOCK BE THE SIZE OF THE ARRAY.
- You are allowed to use the Thrust library in your implementation if you so choose. Thrust is not necessary to achieve the performance of the optimized CUDA reference implementation.
- Is there data reuse in the renderer? What can be done to exploit this reuse?
- The circle-intersects-box tests provided to you in `circleBoxTest.cu_inl` are your friend.
- How will you ensure atomicity of image update since there is no CUDA language primitive that performs the logic of the image update operation atomically? Constructing a lock out of global memory atomic operations is one solution, but keep in mind that even if your image update is atomic, the updates must be performed in the required order. We suggest that you think about ensuring order in your parallel solution first, and only then consider the atomicity problem (if it still exists at all) in your solution.
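For illustration only, here is the lock pattern expressed with C11 atomics (the helper names are ours). In CUDA, the same loop is typically built from `atomicCAS` on an int in global memory, with `atomicExch` plus a memory fence for the release; beware that naive per-pixel spinlocks can deadlock when threads in the same warp contend, so treat this as a sketch of the idea, not a drop-in solution:

```c
#include <assert.h>
#include <stdatomic.h>

/* Spin until we flip the lock word from 0 (free) to 1 (held). */
static void lock_acquire(atomic_int *lock) {
    int expected = 0;
    while (!atomic_compare_exchange_weak(lock, &expected, 1))
        expected = 0;  /* CAS failed (lock held or spurious failure); retry. */
}

static void lock_release(atomic_int *lock) {
    atomic_store(lock, 0);
}
```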
- If you are having difficulty debugging your CUDA code, you can use `printf` directly from device code if you use a sufficiently new GPU and CUDA library; see this brief guide on how to print from CUDA.
- If you find yourself with free time, have fun making your own scenes!
Catching CUDA Errors
By default, if you access an array out of bounds, allocate too much memory, or otherwise cause an error, CUDA won't inform you; instead it will fail silently and return an error code. You can use the following macro (feel free to modify it) to wrap CUDA calls:
```c
#define DEBUG

#ifdef DEBUG
#define cudaCheckError(ans) { cudaAssert((ans), __FILE__, __LINE__); }
inline void cudaAssert(cudaError_t code, const char *file, int line, bool abort=true) {
    if (code != cudaSuccess) {
        fprintf(stderr, "CUDA Error: %s at %s:%d\n",
                cudaGetErrorString(code), file, line);
        if (abort) exit(code);
    }
}
#else
#define cudaCheckError(ans) ans
#endif
```
Note that you can undefine DEBUG to disable error checking for improved performance once your code is correct.
You can then wrap CUDA API calls to process their returned errors as such:
cudaCheckError( cudaMalloc(&a, size*sizeof(int)) );
Note that you can't wrap kernel launches directly. Instead, their errors will be caught on the next CUDA call you wrap:
```c
kernel<<<1,1>>>(a); // suppose kernel causes an error!
cudaCheckError( cudaDeviceSynchronize() ); // error is printed on this line
```
All CUDA API functions (`cudaDeviceSynchronize`, `cudaMemcpy`, `cudaMemset`, and so on) can be wrapped.
IMPORTANT: if a CUDA function errored previously but the error wasn't caught, it will show up in the next error check, even if that check wraps a different function. For example:
```c
...
line 742: cudaMalloc(&a, -1);              // executes, then continues
line 743: cudaCheckError(cudaMemcpy(a,b)); // prints "CUDA Error: out of memory at cudaRenderer.cu:743"
...
```
Therefore, while debugging, it's recommended that you wrap all CUDA API calls (at least in code that you wrote).
(Credit: adapted from this Stack Overflow post)
Hand-in Instructions
Please submit your work using Autolab.
- Please submit your writeup as the file `writeup.pdf`.
- Please submit your code under the folder `code`. Just submit your full assignment 2 source tree. To keep submission sizes small, please do a `make clean` in the program directories prior to creating the archive, and remove any residual output images, etc. Before submitting the source files, make sure that all code is compilable and runnable! We should be able to simply make, then execute your programs in `/saxpy`, `/scan`, and `/render` without manual intervention.
- If you're working with a partner, form a group on Autolab before submitting your assignment. One submission per group is sufficient.
Our grading scripts will rerun the checker code, allowing us to verify that your score matches what you reported in `writeup.pdf`. We might also run your code on other datasets to further examine its correctness.