Open hfr1tz3 opened 3 months ago
I'm just making a comment to link all collaborators to the PR: Marissa Masden (@mmasden) and Halley Fritze (@hfr1tz3).
Hello @hfr1tz3! Thank you for your submission. As we near the end of the challenge, I am collecting participant info for the purpose of selecting and announcing winners. Please email me (or have one member of your team email me) at guillermo_bernardez@ucsb.edu so I can share access to the voting form. In your email, please include:
Before July 12, make sure that your submission respects all Submission Requirements laid out on the challenge page. Any submission that fails to meet these criteria will be automatically disqualified.
In this PR, we implement a lifting from graphs to hypergraphs via the "Mapper on Graphs" construction (depicted in Figure 30 of [1]), which can be enriched with the structure of a combinatorial complex. It operates in the following way:
1) Begin with a graph $X$ and a choice of filter function $g: X^{(0)}\to [a,b]$.
2) The algorithm creates $\mathcal{U}$, a cover of $[a,b]$, based on the parameters `resolution` (number of intervals) and `gain` (proportion of overlap between neighboring intervals).
3) For each $U\in \mathcal{U}$, the vertex set $g^{-1}(U)$ induces a subgraph $X_U$ of $X$; together, these pullback sets cover $X^{(0)}$.
4) For each $U$, "cluster" $X_U$ into its connected components. Each connected component is a hyperedge in the lifted topology.
5) Additionally, each edge in $X^{(1)}$ is a hyperedge in the lifted topology. This could allow for the enrichment of the hypergraph with the structure of a combinatorial complex, if desired.

The 1-skeleton of the nerve of the resulting cover would give the results of the classic Mapper algorithm for graph simplification with filter function $g$.
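The steps above can be sketched in plain Python. This is a minimal illustration, not the PR's actual implementation: the function names (`make_cover`, `mapper_hyperedges`) and the cover convention (equal-width base intervals, each stretched by `gain` on both sides) are assumptions for the sketch.

```python
def make_cover(lo, hi, resolution, gain):
    """Step 2: cover [lo, hi] with `resolution` overlapping intervals.
    One common convention: equal-width base intervals, each stretched by
    `gain` times the base width on both sides to overlap its neighbours."""
    base = (hi - lo) / resolution
    return [(lo + i * base - gain * base, lo + (i + 1) * base + gain * base)
            for i in range(resolution)]

def connected_components(vertices, edges):
    """Connected components of the graph (vertices, edges), as frozensets."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    comps, seen = [], set()
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:  # depth-first traversal of one component
            w = stack.pop()
            if w not in comp:
                comp.add(w)
                stack.extend(adj[w] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def mapper_hyperedges(vertices, edges, g, resolution=10, gain=0.3):
    """Steps 2-5: cover the filter range, pull back each interval to an
    induced subgraph, and emit its connected components (plus every
    original edge) as hyperedges. `g` maps each vertex to a filter value."""
    lo = min(g[v] for v in vertices)
    hi = max(g[v] for v in vertices)
    hyperedges = []
    for a, b in make_cover(lo, hi, resolution, gain):
        pullback = {v for v in vertices if a <= g[v] <= b}   # step 3
        sub_edges = [(u, v) for u, v in edges
                     if u in pullback and v in pullback]
        hyperedges.extend(connected_components(pullback, sub_edges))  # step 4
    hyperedges.extend(frozenset(e) for e in edges)           # step 5
    return hyperedges
```

On a path graph with the identity as filter, `resolution=2` and `gain=0.3` produce two overlapping pullback components plus one hyperedge per original edge.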
By default, the function $g$ is the first projection of the graph Laplacian embedding of the unweighted edge adjacency matrix of $X$, as this is defined on all graphs and is known to capture topological information. However, we also provide alternative filters that can be selected by the user (defined below); some require that the graph $X$ be enriched with additional features. Users may also define other filter functions $g$, in the form of a lambda function, depending on what features their data contains.
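The PR obtains this default filter through `torch_geometric.transforms.AddLaplacianEigenvectorPE`. As a dependency-free illustration of the same idea, the eigenvector of the smallest nonzero Laplacian eigenvalue (the Fiedler vector) can be approximated by power iteration on a shifted Laplacian; this is a hedged sketch, and `laplacian_filter` and its conventions are not part of the PR.

```python
import math

def laplacian_filter(adj):
    """Approximate the Fiedler vector of the graph Laplacian L = D - A by
    power iteration on M = c*I - L (c large enough that M is positive),
    deflating the constant eigenvector of L at each step.
    `adj` maps each node to the set of its neighbours."""
    nodes = sorted(adj)
    n = len(nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    deg = [len(adj[v]) for v in nodes]
    c = 2 * max(deg) + 1  # L's spectrum lies in [0, 2*max_deg]

    def matvec(x):
        # y = (c*I - L) x = (c - deg) * x + A x
        y = [(c - deg[i]) * x[i] for i in range(n)]
        for v in nodes:
            for u in adj[v]:
                y[idx[v]] += x[idx[u]]
        return y

    x = [math.sin(i + 1) for i in range(n)]  # arbitrary start vector
    for _ in range(500):
        m = sum(x) / n                # remove the constant component
        x = matvec([xi - m for xi in x])
        norm = math.sqrt(sum(xi * xi for xi in x))
        x = [xi / norm for xi in x]
    return dict(zip(nodes, x))
```

On a path graph, the Fiedler vector is monotone along the path, so sorting nodes by this filter recovers the path order (up to reversal).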
On the given Zinc dataset, we observe the following when applying default parameters.
Additionally, for the sake of time, we restricted to the case of constructing a hypergraph using 1-dimensional filters. However, this implementation can be extended to higher-dimensional filters to lift a graph to a larger hypergraph (or a simplicial complex, if we construct the nerve of the pullback).
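To illustrate the nerve construction mentioned above: given the pullback cover sets, one adds a k-simplex for every k+1 sets with non-empty common intersection. The following is a minimal sketch under that definition; the function name `nerve` and the `max_dim` cutoff are assumptions, not part of the PR.

```python
from itertools import combinations

def nerve(cover_sets, max_dim=2):
    """Simplices (as tuples of cover-set indices) of the nerve of a cover:
    one k-simplex for every k+1 cover sets that intersect non-trivially.
    Only simplices up to dimension `max_dim` are enumerated."""
    simplices = []
    for k in range(1, max_dim + 2):  # k sets give a (k-1)-simplex
        for combo in combinations(range(len(cover_sets)), k):
            if set.intersection(*(set(cover_sets[i]) for i in combo)):
                simplices.append(combo)
    return simplices
```

For three cover sets chained along a path, the nerve is itself a path: two edges and no triangle, since only consecutive sets overlap.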
The example filter functions $g$ which are implemented are the following:

- `"laplacian"`: Converts the data to an undirected graph, applies the `torch_geometric.transforms.AddLaplacianEigenvectorPE(k=1)` transform, and projects onto the eigenvector of the smallest nonzero eigenvalue.
- `"svd"`: Applies the `torch_geometric.transforms.SVDFeatureReduction(out_channels=1)` transform to the node feature matrix (i.e. `torch_geometric.Data.data.x`) to project the data to a 1-dimensional subspace.
- `"feature_pca"`: Applies `torch.pca_lowrank(q=1)` to the node feature matrix (i.e. `torch_geometric.Data.data.x`) and then projects onto the 1st principal component.
- `"position_pca"`: Applies `torch.pca_lowrank(q=1)` to the node position matrix (i.e. `torch_geometric.Data.data.pos`) and then projects onto the 1st principal component.
- `"feature_sum"`: Applies `torch.sum(dim=1)` to the node feature matrix in the graph (i.e. `torch_geometric.Data.data.x`).
- `"position_sum"`: Applies `torch.sum(dim=1)` to the node position matrix in the graph (i.e. `torch_geometric.Data.data.pos`).

You may also construct your own `filter_attr` and `filter_func`:

- `"my_filter_attr"`: Define `my_filter_func` so that `my_filter_func(data)` outputs an `(n_sample, 1)` Tensor. Then, when calling the transform, set `filter_attr = "my_filter_attr"` and `filter_func = my_filter_func`.
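For instance, a custom filter that sums each node's features might look like the following schematic sketch. The `Data` class here is a stand-in for `torch_geometric.data.Data`, and plain nested lists stand in for the `(n_sample, 1)` Tensor, so the snippet runs without torch installed.

```python
# Stand-in for torch_geometric.data.Data; in practice `data.x` is a Tensor.
class Data:
    def __init__(self, x):
        self.x = x

# Custom filter: sum each node's features, keeping an (n_sample, 1) shape.
my_filter_func = lambda data: [[sum(row)] for row in data.x]

data = Data(x=[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
values = my_filter_func(data)  # one filter value per node
```

When calling the lifting transform, this function would then be passed via `filter_attr="my_filter_attr"` and `filter_func=my_filter_func`.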
[1] Hajij, M., Zamzmi, G., Papamarkou, T., Miolane, N., Guzmán-Sáenz, A., Ramamurthy, K. N., et al. (2022). Topological deep learning: Going beyond graph data. arXiv preprint arXiv:2206.00606.
This is joint work between Halley Fritze and Marissa Masden.