alga-hopf / dl-spectral-graph-partitioning

Deep learning and spectral embedding for graph partitioning

Question about model performance #1

Open · zr-swu opened this issue 1 year ago

zr-swu commented 1 year ago

Hello, I am a master's student from China. Thank you for making the code public. I want to do some more in-depth research based on your article and finish my dissertation, but I cannot reproduce the results in the paper. For example, on the GradedL dataset I get the following results:

Number of graphs: 23
Max nodes: 9526
Max edges: 56550

| Median metric | GAP | App. Spectral | METIS | Spectral |
| --- | --- | --- | --- | --- |
| Normalized cut | 0.0248 | 0.0226 | 0.0256 | 0.0234 |
| Balance | 1.2574 | 1.8844 | 1.0034 | 1.8974 |
| Cut | 108.0 | 96.0 | 114.0 | 94.0 |
| Runtime | 0.2983 | 0.1151 | 0.0227 | 0.1174 |

Graphs for which GAP fails: []

But the results in the paper are: [image: results table from the paper]

I wonder whether the test data used in the paper is different from the data used in the public code. Could you give me some help?

alga-hopf commented 1 year ago

Hello! I see that you tested on Graded L graphs with at most 9526 nodes, while in the paper we tested on graphs with |V| <= 150K. When you run testing.py, you may want to increase --nmax in order to test the algorithms on bigger graphs. Hope this helps.
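For example, an invocation along these lines (only --nmax is confirmed in this thread; any other options of testing.py are left at their defaults):

```
python testing.py --nmax 150000
```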

zr-swu commented 1 year ago

Thanks for your reply; it helped me a lot. I will test the model on bigger graphs. I also have another question, about the SuiteSparse matrices. Since you mention in the README that the paper focuses on matrices coming from "2D/3D discretizations", I downloaded matrices labeled "2D/3D Problem" from SuiteSparse, but training on these matrices resulted in a loss value of NaN. I would like to know which matrices from SuiteSparse were used in your training.

alga-hopf commented 1 year ago

I suggest using the Java graphical interface to download the matrices (https://sparse.tamu.edu/interfaces). Note that every matrix needs to have pattern symmetry and numerical symmetry equal to 1. Also note that new matrices may have been added to the collection after we trained our models. In which module did you find this problem: the embedding one or the partitioning one?
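For reference, here is a minimal sketch (not part of the repo) of how such a symmetry check could look, assuming the matrices are stored in Matrix Market (.mtx) format and SciPy is available; the file name below is just a placeholder:

```python
# Sketch: check whether a downloaded SuiteSparse matrix has
# pattern symmetry and numerical symmetry equal to 1, as required above.
# Assumes the matrix is stored in Matrix Market format; "matrix.mtx" is a placeholder.
import scipy.io

A = scipy.io.mmread("matrix.mtx").tocsr()

# Pattern (structural) symmetry: the nonzero pattern of A equals that of A^T.
pattern = A.astype(bool)
pattern_symmetric = (pattern != pattern.T).nnz == 0

# Numerical symmetry: A equals A^T entrywise (exact comparison; a tolerance
# could be used instead for floating-point data).
numerically_symmetric = (A != A.T).nnz == 0

print("pattern symmetry == 1:", pattern_symmetric)
print("numerical symmetry == 1:", numerically_symmetric)
```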