c0d3r

To reiterate what was said earlier: compression in the GPU doesn't affect the cache line (since data is decompressed into the cache). Rather, it reduces traffic over the bus.

bysreg

Just want to confirm: if you have a 4 x 4 image with only one channel, does that mean the resulting compressed image data (comp) is:

comp(0, 0) = original(0, 0)
comp(0, 1) = original(0, 1) - original(0, 0)
comp(1, 0) = original(1, 0) - original(0, 0)
comp(i, j) for all other (i, j) = original(i, j) - (original(0, 0) + i*comp(0, 1) + j*comp(1, 0))

with comp(i, j) potentially stored in fewer than 8 bits? Doesn't that mean we need to store the bit count of the largest correction pixel?
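
To make the question concrete, here is a small sketch (the helper name compress_tile, the 4x4 example tile, and the row/column indexing convention are mine, not anything from the lecture): the tile is predicted as a plane from the anchor pixel and the two deltas, only the per-pixel corrections are kept, and the width of the widest correction is exactly what a per-tile bit count would have to record.

```python
# Minimal sketch of anchor + delta ("planar") tile compression.
# Assumes (i, j) = (row, column) indexing on a 4x4, single-channel, 8-bit tile.

def compress_tile(original):
    """original: 4x4 list of lists of 8-bit pixel values (one channel)."""
    anchor = original[0][0]
    dx = original[0][1] - anchor   # delta along columns (j direction)
    dy = original[1][0] - anchor   # delta along rows (i direction)

    corrections = []
    for i in range(4):
        for j in range(4):
            predicted = anchor + i * dy + j * dx
            corrections.append(original[i][j] - predicted)

    # Corrections are signed: store enough bits for the widest one.
    max_mag = max(abs(c) for c in corrections)
    bits_per_correction = max_mag.bit_length() + 1  # +1 for the sign bit

    return anchor, dx, dy, corrections, bits_per_correction


if __name__ == "__main__":
    # A tile that is almost a perfect linear ramp, plus one noisy pixel.
    tile = [[10 + 3 * i + 2 * j for j in range(4)] for i in range(4)]
    tile[2][3] += 1
    anchor, dx, dy, corr, bits = compress_tile(tile)
    print(anchor, dx, dy, bits)   # corrections fit in 2 bits for this tile
```

In this sketch, the per-tile metadata would be the anchor, the two deltas, and that bit count, so the storage needed for the corrections varies from tile to tile.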

ojd

Minor point I thought I should bring up.

Compression likes heavily structured data. The cache compression techniques in the prior slides relied on the data being largely a mix of integers whose high bits go unused and pointers that fall within a narrow range of one another. Compression on GPUs is much more aggressive because most of VRAM is taken up by framebuffers and textures, which have a lot of local structure. These compression mechanisms can degrade on textures with a lot of entropy, however.

For depth buffers, where the data is even more structured than data derived from natural images, GPUs can and historically have applied even more aggressive lossless compression techniques. This has had implications for the cost of actually reading or modifying a depth buffer directly.

Here's a survey on depth buffer compression (from 2006, so it's somewhat outdated now): http://fileadmin.cs.lth.se/graphics/research/papers/depth2006/dbc.pdf

nmrrs

It's also worth noting that if the "compressed" version actually ends up being larger than the original data stored at each pixel, we can always just use the original version. This can happen if a region of an image does not have a smooth linear gradient.
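
A back-of-the-envelope version of that fallback (the helper name and header size are made up for illustration): the compressed form is kept only when the header plus per-pixel corrections come out smaller than storing every pixel raw.

```python
def pick_encoding(num_pixels, bits_per_correction, header_bits=24):
    """Choose between a compressed tile and the raw 8-bit-per-pixel tile."""
    compressed_bits = header_bits + num_pixels * bits_per_correction
    raw_bits = num_pixels * 8
    return "compressed" if compressed_bits < raw_bits else "raw"

print(pick_encoding(16, 2))   # smooth 4x4 tile: 56 < 128 bits -> "compressed"
print(pick_encoding(16, 9))   # noisy tile needing 9-bit corrections -> "raw"
```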