A single vertex can be connected to every other vertex in the graph, which can limit the amount of parallelism in a fully connected graph. Perhaps "edge consistency" could be relaxed to allow concurrent reads?
418_touhenying
Could read consistency and write consistency also be considered separately?
enuitt
What would be a good method to gauge which consistency model a program wants? I assume the programmer specifies the granularity to GraphLab based on his or her understanding of the program.
yangwu
Does it mean we divide the graph into disconnected subsets and run PageRank within these sets in parallel? If so, when would we update nodes on the boundaries? Or do we just ignore those nodes in the iteration?
Renegade
I think this just provides more choices for synchronising updates and leaves the decision to programmers. PageRank may not be suitable for loose consistency, but other algorithms that are less sensitive to stale data or race conditions can speed up significantly from relaxed consistency.
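To make the staleness point concrete, here is a small sketch (not GraphLab's actual API) of a toy PageRank loop. The "strict" variant always reads the freshest neighbor values within an iteration (Gauss-Seidel style, roughly what serializable edge consistency guarantees for a serial schedule), while the "relaxed" variant reads a stale snapshot from the previous sweep (Jacobi style, the kind of read that weaker consistency permits). Both converge to the same fixed point, which is why PageRank can tolerate some staleness even if individual iterations differ; the graph and constants below are illustrative assumptions.

```python
DAMPING = 0.85  # standard PageRank damping factor

def pagerank(in_links, out_degree, relaxed, iters=200):
    """Toy PageRank. If relaxed, each sweep reads a stale snapshot of
    ranks (Jacobi); otherwise it reads live, freshest values (Gauss-Seidel)."""
    n = len(out_degree)
    rank = [1.0 / n] * n
    for _ in range(iters):
        # stale snapshot vs. live values: the only difference between modes
        src = list(rank) if relaxed else rank
        for v in range(n):
            total = sum(src[u] / out_degree[u] for u in in_links[v])
            rank[v] = (1 - DAMPING) / n + DAMPING * total
    return rank

# Hypothetical 3-vertex graph: edges 0->1, 0->2, 1->2, 2->0
in_links = {0: [2], 1: [0], 2: [0, 1]}
out_degree = [2, 1, 1]

strict = pagerank(in_links, out_degree, relaxed=False)
loose = pagerank(in_links, out_degree, relaxed=True)
# Both schedules converge to the same fixed point
assert all(abs(a - b) < 1e-9 for a, b in zip(strict, loose))
```

Of course, this only shows tolerance of staleness for a fixed-point iteration; algorithms that need per-update atomicity (e.g. ones mutating shared state non-idempotently) would still want the stronger consistency models.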