Slide 55 of 78
Holladay

Can different warps be running different instruction streams? (Is that a question that even makes sense?)

bpr

@Holladay, it depends. Each warp is a separate instruction stream that is scheduled independently of the other warps. However, the warps are usually all from the same kernel launch, so they will have much in common with each other.
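A toy Python sketch of that idea (this is a simplified model, not real GPU code: the warp count, scheduler, and three-instruction "kernel" are all made-up assumptions). Each warp keeps its own program counter, so different warps can sit at different points in the same kernel's instruction stream:

```python
# Toy model: each warp is an independent instruction stream (its own
# program counter), but all warps execute the same kernel's instructions.
KERNEL = ["load", "add", "store"]  # hypothetical instruction sequence

class Warp:
    def __init__(self, warp_id):
        self.warp_id = warp_id
        self.pc = 0  # per-warp program counter

    def step(self):
        instr = KERNEL[self.pc]
        self.pc += 1
        return instr

warps = [Warp(i) for i in range(4)]

# Warps advance independently, so at any given moment different warps can
# be at different instructions of the same kernel.
warps[0].step()  # warp 0 gets ahead (e.g., its operands were ready first)
trace = [(w.warp_id, w.step()) for w in warps]

print(trace)  # warp 0 is at "add" while warps 1-3 are still at "load"
```

The point is just that "different instruction streams" here means independent progress through (usually) the same program, not different programs.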

gogogo

What's the motivation behind the concept of a warp? I thought it was a little bit contrived.

EggyLv999

What's the mechanism for loading instructions into warps? Are the instructions stored somewhere on the GPU, or in main memory?

Split_Personality_Computer

@gogogo I think the idea behind warps is that threads inside a warp can communicate with each other much more cheaply than threads in different warps (e.g., NVIDIA exposes warp-level shuffle instructions that move values directly between threads' registers). One correction, though: shared memory is per thread block, not per warp, so all the warps in a block share it. This isn't taken advantage of in this example, but if you were doing an operation where every thread had to contribute a number, imagine the difference between 1) every thread writing its value to memory and 2) the threads in each warp combining their values first, with each warp then doing a single write to memory.
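A rough Python sketch of that trade-off (the thread count and the sum-reduction are illustrative assumptions, and this only counts writes rather than modeling any real memory system):

```python
WARP_SIZE = 32
NUM_THREADS = 256  # 8 warps

values = list(range(NUM_THREADS))  # one number contributed per thread

# Option 1: every thread performs its own write to memory.
writes_option1 = NUM_THREADS  # 256 memory writes

# Option 2: the threads in each warp combine their values first (on real
# hardware, e.g. via warp-level shuffles), then one thread per warp
# writes the warp's partial sum.
warp_sums = [
    sum(values[w * WARP_SIZE:(w + 1) * WARP_SIZE])
    for w in range(NUM_THREADS // WARP_SIZE)
]
writes_option2 = len(warp_sums)  # 8 memory writes, one per warp

assert sum(warp_sums) == sum(values)  # same total, far fewer writes
print(writes_option1, writes_option2)
```

Same result either way, but the warp-level version touches memory 32x less often, which is where the intra-warp communication pays off.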

Also remember that GPUs are very parallel and they use SIMD execution. I think the reason each warp has 32 threads is that the hardware executes a warp with 32-wide SIMD: one instruction is issued once and applied across all 32 threads at the same time.
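A Python sketch of that 32-wide lockstep execution, including the execution mask used on divergent branches (this is the standard textbook SIMT model, heavily simplified; the mask mechanism here is an assumption for illustration):

```python
WARP_SIZE = 32

# All 32 lanes (the threads of one warp) execute the same instruction in
# lockstep; a per-lane mask disables lanes that took the other side of a
# branch.
x = list(range(WARP_SIZE))
mask = [xi % 2 == 0 for xi in x]  # branch: "if x is even"

# One instruction ("multiply by 10") is issued once and applied to all
# active lanes; masked-off lanes keep their old values.
result = [xi * 10 if m else xi for xi, m in zip(x, mask)]

print(result[:4])  # [0, 1, 20, 3]
```

This also shows why divergence within a warp is costly: the inactive lanes do no useful work while the instruction executes, whereas divergence *between* warps costs nothing, since each warp is its own instruction stream.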