Given a suitable partitioning strategy, if each subgraph/component fits (more or less) on one machine, the mapper could load that machine's subgraph into an in-memory graph system (e.g. Fulgora or TinkerGraph -- or any Graph, for that matter) and run iterations in memory. Once enough iterations have been done locally and cross-component communication is needed, the reduce phase can propagate that information and write the entire graph back to HDFS. This way, you get more algorithm iterations out of fewer MapReduce passes.
This would be a hybrid of MapReduce, Pregel, and Cassovary.
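A rough sketch of what the mapper side could look like, assuming TinkerPop 3's TinkerGraph and the plain Hadoop Mapper API. The tab-separated edge-list input, the "value" property, and the propagation rule are just placeholders for whatever the real job computes, and the reducer that merges boundary messages and writes the graph back to HDFS is not shown:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.tinkerpop.gremlin.structure.Direction;
import org.apache.tinkerpop.gremlin.structure.Edge;
import org.apache.tinkerpop.gremlin.structure.T;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

/**
 * Each mapper receives the edges of one partition (one "srcId<TAB>dstId"
 * line per edge), loads them into an in-memory TinkerGraph, runs a few
 * value-propagation iterations locally, and only then emits updated vertex
 * state (plus, in a real job, messages for vertices in other partitions)
 * for the reduce phase to merge and write back to HDFS.
 */
public class LocalIterationMapper extends Mapper<LongWritable, Text, LongWritable, Text> {

    private static final int LOCAL_ITERATIONS = 10;   // in-memory supersteps per MapReduce pass
    private final TinkerGraph graph = TinkerGraph.open();

    // Fetch-or-create a vertex for the given id, initialising its value.
    private Vertex vertexFor(long id) {
        Iterator<Vertex> it = graph.vertices(id);
        if (it.hasNext()) return it.next();
        Vertex v = graph.addVertex(T.id, id);
        v.property("value", 1.0d);
        return v;
    }

    @Override
    protected void map(LongWritable key, Text line, Context context) {
        // Hypothetical input format: "srcId\tdstId", all edges of one partition.
        String[] parts = line.toString().split("\t");
        Vertex src = vertexFor(Long.parseLong(parts[0]));
        Vertex dst = vertexFor(Long.parseLong(parts[1]));
        src.addEdge("link", dst);
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Run several supersteps entirely in memory -- no shuffle between them.
        for (int i = 0; i < LOCAL_ITERATIONS; i++) {
            graph.vertices().forEachRemaining(v -> {
                double share = v.<Double>value("value") / Math.max(1, countOut(v));
                v.edges(Direction.OUT).forEachRemaining(e ->
                        e.inVertex().property("value",
                                e.inVertex().<Double>value("value") + share));
            });
        }
        // Emit the updated state of every local vertex; a second record type
        // (omitted) would carry cross-partition messages for the reducer.
        Iterator<Vertex> vertices = graph.vertices();
        while (vertices.hasNext()) {
            Vertex v = vertices.next();
            context.write(new LongWritable((Long) v.id()),
                    new Text("value=" + v.<Double>value("value")));
        }
    }

    private static long countOut(Vertex v) {
        long n = 0;
        for (Iterator<Edge> it = v.edges(Direction.OUT); it.hasNext(); it.next()) n++;
        return n;
    }
}

The point is simply that the shuffle only happens once per LOCAL_ITERATIONS supersteps instead of once per superstep, which is where the savings over a vanilla MapReduce-per-iteration job would come from.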