Slide 26 of 33
uhkiv

Summary of the next few slides: another way to deal with data transfer being the bottleneck is to compress the data so we can save bandwidth. The scheme has to be simple and fast for the reasons listed above. One way to compress is to exploit the fact that nearby data likely has similar values: we select the minimum (or the first) value in the cache line as the base and record the other values as deltas (original value - base). However, this works poorly when the values in a line deviate a lot from one another. Hence, we use two-pass basing: in the first pass, we compress with zero as the implicit base (which we do not have to store), leaving a value uncompressed if zero is a bad base for it. In the second pass, we select the first of those uncompressed values and use it as the base for the rest of them.
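The two-pass scheme above can be sketched roughly as follows. This is a minimal illustration, not the exact hardware format: the delta width, the per-value tag bits, and the function names are all assumptions made for clarity.

```python
def fits(delta, bits):
    """True if delta fits in a signed field of `bits` bits."""
    return -(1 << (bits - 1)) <= delta < (1 << (bits - 1))

def compress_line(values, delta_bits=8):
    """Two-pass base+delta compression of one cache line.

    Pass 1: try the implicit base 0 (no need to store it).
    Pass 2: among the values 0 could not cover, use the first
    one as a second, explicitly stored base.

    Returns (base, tags, deltas) on success, or None if some
    delta does not fit and the line is incompressible.
    """
    tags = []     # 0 = delta from implicit base 0, 1 = delta from second base
    deltas = []
    base = None   # second base, chosen lazily in "pass 2"
    for v in values:
        if fits(v, delta_bits):          # pass 1: zero works as a base
            tags.append(0)
            deltas.append(v)
        else:                            # pass 2: fall back to second base
            if base is None:
                base = v                 # first uncovered value becomes the base
            d = v - base
            if not fits(d, delta_bits):
                return None              # line is incompressible at this width
            tags.append(1)
            deltas.append(d)
    return base, tags, deltas

def decompress_line(base, tags, deltas):
    """Invert compress_line: add back the base each tag selects."""
    return [d if t == 0 else base + d for t, d in zip(tags, deltas)]
```

For example, a line mixing small values with a cluster of large, nearby values (say `[3, 0x1234, 0x1236, 5, 0x1230]`) compresses: the small values take base 0, and the cluster takes `0x1234` as the second base with tiny deltas.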

cardiff

To clarify, the line about "more cache hits = fewer transfers" means that because compression increases the cache's effective capacity, more data fits in the cache, fewer items are evicted, and so there are fewer misses and more hits overall. Fewer misses mean fewer full cache-line transfers. Ideally, this savings in transfer time and cost would more than offset the additional cost of compressing and decompressing the data.
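The trade-off in that last sentence can be made concrete with a back-of-envelope model. The numbers below (miss rates, cycle counts, codec cost) are purely illustrative assumptions, not figures from the lecture:

```python
def memory_cycles(accesses, miss_rate, transfer_cost, codec_cost=0):
    """Total cycles spent on misses plus (de)compression overhead.

    transfer_cost: cycles per full cache-line transfer on a miss.
    codec_cost:    extra cycles paid on every access for the codec.
    """
    return accesses * (miss_rate * transfer_cost + codec_cost)

# Hypothetical: compression raises effective capacity, cutting the
# miss rate from 10% to 6%, at 5 extra cycles per access.
baseline   = memory_cycles(1_000_000, miss_rate=0.10, transfer_cost=200)
compressed = memory_cycles(1_000_000, miss_rate=0.06, transfer_cost=200,
                           codec_cost=5)
```

Under these assumed numbers the compressed configuration wins (17M vs. 20M cycles); with a smaller miss-rate improvement or a slower codec, the inequality flips, which is exactly why the scheme must be simple and fast.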