Sometimes (especially with large graphs), we may be limited by memory bandwidth. To mitigate this, we can exploit cache locality so that we access the same parts of the graph repeatedly instead of paying the bandwidth cost of re-fetching them. Additionally, we could distribute the work across multiple machines using graph clustering. Finally, we can shrink the data or stream the graph computation: grab a large chunk of the graph, use it fully, and write it back. I had a question about this: how is this streaming process different from utilizing cache locality?
Optimizing graph computations can be difficult because most graph algorithms are heavily bandwidth bound. For example, in PageRank there is only about one arithmetic operation per edge. In some cases, one option is to reorganize the computation to exploit locality. In other cases the problem is that we're disk bound because the graph can't fit on one machine, as mentioned in this slide. To solve this issue, the options are to compress the graph (decompressing pieces as needed) or to split it across many different machines.
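To make the "one operation per edge" point concrete, here is a minimal sketch of one edge-centric PageRank iteration (a toy illustration, not from the slides; the graph representation and function name are my own). The inner loop does roughly one load and one multiply-add per edge, so runtime is dominated by streaming edges from memory rather than by arithmetic:

```python
def pagerank_iteration(edges, out_degree, rank, damping=0.85):
    """One PageRank iteration over an edge list.

    The work per edge is tiny (one load, one multiply-add), which is
    why the computation is bandwidth bound rather than compute bound.
    """
    n = len(rank)
    new_rank = [(1.0 - damping) / n] * n
    for src, dst in edges:  # one pass over all edges
        # ~1 arithmetic operation per edge
        new_rank[dst] += damping * rank[src] / out_degree[src]
    return new_rank

# Toy graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
out_degree = [2, 1, 1]
rank = [1 / 3, 1 / 3, 1 / 3]
rank = pagerank_iteration(edges, out_degree, rank)
```

Since every vertex in this toy graph has at least one outgoing edge, the ranks still sum to 1 after the iteration.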
@makingthingsfast With cache locality, we live under the assumption that once something is loaded into the cache, values in nearby storage locations will be accessed again by future code; the hardware then keeps whatever has been referenced recently. With streaming, we know up front exactly which data we are about to manipulate, so we explicitly load a chunk, process it completely, and evict it. In other words, streaming favors data that has just been loaded (and will be used once), rather than relying on the cache to retain data that has been referenced recently.
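Here is a hypothetical sketch of that streaming pattern (the partitioning scheme and function name are my own, just for illustration): the edge list is split into partitions that are loaded one at a time, used completely, and then discarded, so we control residency explicitly instead of relying on the hardware cache's recency policy:

```python
def stream_degrees(edge_partitions, n):
    """Compute out-degrees by streaming over edge partitions.

    Each partition is loaded, every edge in it is used exactly once,
    and then the partition is dropped before the next one is loaded.
    """
    degree = [0] * n
    for partition in edge_partitions:  # load one chunk of the graph
        for src, dst in partition:     # use every edge in the chunk
            degree[src] += 1
        # the chunk is no longer needed; a real system would write it
        # back / evict it here before loading the next chunk

    return degree

parts = [[(0, 1), (0, 2)], [(1, 2), (2, 0)]]
degrees = stream_degrees(parts, 3)  # -> [2, 1, 1]
```

The result is the same as a single pass over the whole edge list; the point is that at any moment only one partition needs to be resident.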