wangfuli / T-HyperGNNs

Tensor-based hypergraph neural networks

the cora dataset hypergraph in cocitation is constructed? #1

Open zhangyu2234 opened 3 weeks ago

zhangyu2234 commented 3 weeks ago

Dear author, I would like to know how the hypergraph for the Cora dataset in the co-citation setting is constructed. If you could answer this question, it would be a great help to me.

wangfuli commented 3 weeks ago

Hello, thank you for your interest in our work. Please see this file for the sampling methods we used: https://drive.google.com/file/d/1Xd2QNGXKIpWxUcnpP9bVi_QuFCpU70WN/view?usp=sharing

It should run smoothly with the co-authorship and co-citation datasets from the public datasets in HyperGCN (https://github.com/malllabiisc/HyperGCN).
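For readers who just want the idea before opening the linked file: a common co-citation construction (an illustration with made-up paper IDs, not the authors' exact script) is that every citing document induces one hyperedge over the set of documents it cites, and singleton hyperedges are discarded.

```python
from collections import defaultdict

# Toy citation graph: (citing paper, cited paper) pairs.
citations = [
    ("p1", "p2"), ("p1", "p3"),
    ("p2", "p3"), ("p2", "p4"), ("p2", "p5"),
    ("p4", "p5"),
]

# Group cited papers by their citing paper: one candidate hyperedge
# per citing document.
hyperedges = defaultdict(set)
for citing, cited in citations:
    hyperedges[citing].add(cited)

# Keep only hyperedges with at least two nodes; singleton edges carry
# no higher-order structure.
hypergraph = {e: nodes for e, nodes in hyperedges.items() if len(nodes) >= 2}
print(hypergraph)  # p1 and p2 each induce a surviving hyperedge
```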

zhangyu2234 commented 2 weeks ago

> Hello, thank you for your attention in our work. Please see the file for the sampling methods we used: https://drive.google.com/file/d/1Xd2QNGXKIpWxUcnpP9bVi_QuFCpU70WN/view?usp=sharing
>
> It should run smoothly with the co-authorship and co-citation datasets from the public datasets in HyperGCN (https://github.com/malllabiisc/HyperGCN).

I can't express my gratitude enough!

zhangyu2234 commented 2 weeks ago

Dear author, when I use your default House dataset with your `HyperRepresentation.Adjacency()` method, the line `self.A = np.zeros([self.N] * self.M)` creates an array with 81 dimensions, but the maximum supported number of dimensions for a NumPy ndarray is 32, so this exceeds NumPy's limit!

wangfuli commented 2 weeks ago

> Dear author, when I use your default House dataset with your `HyperRepresentation.Adjacency()` method, the line `self.A = np.zeros([self.N] * self.M)` creates an array with 81 dimensions, but the maximum supported number of dimensions for a NumPy ndarray is 32, so this exceeds NumPy's limit!

Hello, yes, the hypergraph tensor representations are quite expensive, with a space complexity of O(N^M), which is not feasible for large datasets like the House dataset. In our experiments, the House dataset is trained only with T-MPHN, which does not require explicitly constructing the adjacency tensor.
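To make the blow-up concrete: the dense adjacency tensor of a hypergraph with N nodes and maximum hyperedge cardinality M has one axis of length N per slot, so N^M entries. The toy numbers below are illustrative assumptions (the 81 comes from the issue above); note that on top of the memory cost, NumPy 1.x also caps ndarrays at 32 dimensions, so an 81-dimensional shape is rejected outright.

```python
import numpy as np

# Small case: N nodes, hyperedges of maximum cardinality M.
# The dense adjacency tensor has shape (N, N, ..., N) with M axes.
N, M = 100, 3
A = np.zeros([N] * M)          # 100**3 = 1e6 entries -- fine for small M
assert A.shape == (N,) * M

# House-sized case: M = 81 axes would need N**81 entries.
# Never allocate this; just count how large it would be.
entries = N ** 81
print(f"N**M for N={N}, M=81 has {len(str(entries))} digits")
```

The exponential dependence on M is why T-MPHN, which message-passes without materializing the tensor, is the only variant run on House.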

zhangyu2234 commented 2 weeks ago

> Hello, yes, the hypergraph tensor representations are quite expensive, with a space complexity of O(N^M), which is not feasible for large datasets like the House dataset. In our experiments, the House dataset is trained only with T-MPHN, which does not require explicitly constructing the adjacency tensor.

Thank you so much for always patiently answering my questions!