stevenxxiu opened this issue 9 years ago
I think factorising a matrix this large would take a very long time, even with sparse methods!
Perhaps we can extract a subgraph on which to experiment?
Yes, it is quite big: we have 4867136 vertices and 24004361 edges, whereas the largest graph in the paper has 1601787 vertices and 8063026 edges. Some approaches, including the Katz score and PageRank, also require computing a matrix inverse, although individual elements of the inverse are enough. Can we compute those quickly?
[addendum] Since A A^{-1} = I, column j of the inverse is just the solution of A x = e_j, so we can solve for a single column rather than the whole inverse. I wonder how long that will take.
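To get a sense of what that looks like, here's a minimal sketch with scipy, assuming the matrix we need is available as a sparse matrix `m` (the toy matrix below is just a stand-in for the real one):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for whatever sparse matrix we need elements of the inverse of.
n = 1000
m = sp.eye(n, format='csc') + sp.random(n, n, density=0.001, format='csc')

# Column j of m^{-1} is the solution of m @ x = e_j, so we never form the full inverse.
j = 0
e_j = np.zeros(n)
e_j[j] = 1.0
col_j = spla.spsolve(m, e_j)
```

For the full 4.8M-vertex matrix a direct solve may still be expensive because of fill-in, so an iterative solver (`spla.cg` for symmetric matrices, `spla.gmres` otherwise) would be worth timing as well.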
> Perhaps we can extract a subgraph on which to experiment?
We could use that to train parameters, but I think the Katz score requires the inverse of the full matrix for the Kaggle test set.
Tuning the parameters in this paper also requires us to split the data into training and dev sets. This also has the benefit that we can compute our scores locally, albeit on a reduced training set.
The Katz score is read from (I - βA)^{-1} - I, but we only need Katz scores for the test pairs, right? If we can produce a subgraph that only involves the test vertices, maybe we could reduce the size of the matrix. This would work better if the test edges weren't randomly chosen.
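If we do restrict ourselves to the test pairs, one way to avoid the full inverse is to solve one linear system per distinct target vertex and read off only the entries we need. A sketch of that idea, where `adj`, `test_pairs` and `beta` are placeholders for the sparse adjacency matrix, the list of (u, v) test pairs and the decay parameter:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def katz_scores(adj, test_pairs, beta=0.005):
    """Katz scores (I - beta*A)^{-1} - I restricted to the given (u, v) test pairs.

    One sparse linear solve per distinct target vertex, instead of a full inverse.
    """
    n = adj.shape[0]
    m = sp.eye(n, format='csc') - beta * adj.tocsc()
    lu = spla.splu(m)  # factorise once, reuse for every right-hand side
    # group pairs by target v: solving m @ x = e_v gives column v of the inverse
    by_target = {}
    for u, v in test_pairs:
        by_target.setdefault(v, []).append(u)
    scores = {}
    for v, sources in by_target.items():
        e_v = np.zeros(n)
        e_v[v] = 1.0
        col_v = lu.solve(e_v)
        for u in sources:
            scores[(u, v)] = col_v[u] - (1.0 if u == v else 0.0)
    return scores
```

Whether the LU factorisation fits in memory on the full graph is another question; if not, the same systems can be solved iteratively, or the inverse can be approximated by the truncated series βA + β²A² + …, which only needs sparse matrix–vector products.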
Good idea, but there's that "7 degrees of separation" thing which may or may not apply. If it does then the subgraph is the entire graph. Worth a try.
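One cheap way to check whether that's actually a problem is to grow a k-hop neighbourhood around the test vertices and see how quickly it approaches the whole graph. A rough sketch, where `adj` and `test_vertices` are placeholders for the sparse adjacency matrix and the vertices appearing in the test pairs:

```python
import numpy as np

def k_hop_vertices(adj, seeds, k):
    """Indices of vertices within k hops of any seed, following out-edges only.

    Pass adj + adj.T instead of adj to treat the graph as undirected.
    """
    n = adj.shape[0]
    reached = np.zeros(n, dtype=bool)
    reached[list(seeds)] = True
    frontier = reached.copy()
    for _ in range(k):
        # one expansion step: vertices that receive an edge from the current frontier
        step = adj.T.dot(frontier.astype(np.float64)) > 0
        frontier = step & ~reached
        reached |= frontier
        if not frontier.any():
            break
    return np.flatnonzero(reached)

# e.g. len(k_hop_vertices(adj, test_vertices, k)) / adj.shape[0] for k = 1, 2, 3
```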
This paper makes it easier to train previous algorithms. The paper doesn't discuss directed graphs explicitly, but says a directed graph can be mapped to a bipartite graph (section 2). I suppose it implies that the two new vertices created from a single vertex have an edge between them as well? Otherwise the original directed path wouldn't map to an undirected path. Any thoughts on this?
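For concreteness, here's one reading of that construction as a sketch (the paper's exact definition isn't quoted here, so treat this as an assumption): each vertex v is split into v_out and v_in, the edge u → v becomes the undirected edge (u_out, v_in), and `link_in_out` controls the extra v_out – v_in edge being discussed:

```python
import scipy.sparse as sp

def to_bipartite(adj, link_in_out=True):
    """Map a directed adjacency matrix A to an undirected bipartite one.

    Each vertex v becomes v_out (first block) and v_in (second block); the edge
    u -> v becomes the undirected edge (u_out, v_in). If link_in_out is True we
    also connect v_out to v_in, which is the reading where a directed path
    u -> v -> w stays a path u_out - v_in - v_out - w_in in the bipartite graph.
    """
    n = adj.shape[0]
    upper_right = adj + sp.eye(n) if link_in_out else adj
    return sp.bmat([[None, upper_right], [upper_right.T, None]], format='csr')
```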
scipy provides sparse eigenvalue decomposition.
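Concretely, that's `scipy.sparse.linalg.eigsh` for symmetric matrices and `eigs` for general ones, both ARPACK-based. A small sketch on a toy symmetric matrix:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy symmetric sparse matrix; for a directed (asymmetric) adjacency use spla.eigs instead.
n = 1000
a = sp.random(n, n, density=0.001)
a = a + a.T

# k largest-magnitude eigenpairs via ARPACK, without densifying the matrix.
vals, vecs = spla.eigsh(a, k=6, which='LM')
```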