ma-compbio / Hyper-SAGNN

hypergraph representation learning, graph neural network
MIT License

problem of training both models of classifier_model and Randomwalk_Word2vec #12

Open intellectwood opened 11 months ago

intellectwood commented 11 months ago

Hello, I greatly admire your work. I notice that the last line of main.py can train both models (classifier_model and Randomwalk_Word2vec), but setting loss=(loss, 1.0), (loss2, 0.0) only trains the classifier_model, right?
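To make my understanding of the weights concrete, here is a minimal sketch (not the repository's actual code; all names below are made up) of how I read the weighted-sum behavior: a weight of 0.0 zeroes out the gradient from the second loss, so only the classifier part receives a training signal.

```python
import torch

def combine_losses(weighted_losses):
    # weighted_losses: iterable of (loss_tensor, weight) pairs
    return sum(weight * loss for loss, weight in weighted_losses)

# Hypothetical loss values standing in for the classifier and random-walk losses
loss_classifier = torch.tensor(0.7, requires_grad=True)
loss_randomwalk = torch.tensor(1.2, requires_grad=True)

total = combine_losses([(loss_classifier, 1.0), (loss_randomwalk, 0.0)])
total.backward()
print(loss_classifier.grad)  # tensor(1.) -- full training signal
print(loss_randomwalk.grad)  # tensor(0.) -- zero weight, so effectively untrained
```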

Then here is my question: when setting loss=(loss, 1.0), (loss2, 1.0), the code throws an error (see attached screenshots).

And I want to know whether the classifier_model alone performs better, or whether training both models does.

ruochiz commented 11 months ago

Hi, thank you for your interest. Yes, with the newer versions of TensorFlow I stopped maintaining the code for the random walk part. The answer is: if you use the model in the adj mode, then the classifier_model alone works well. If it's random-walk based, then that part of the loss is also preferred. A quick fix might be to change line 132 of main.py from

example_emb = model.forward_u(examples)

to

example_emb, _ = model.forward_u(examples)

In addition, main_torch.py removes the dependency on TensorFlow 1.0 and is slightly more up to date (while losing support for the random walk part of the model).
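If you want something that works across versions, here is a small defensive sketch of the same fix (the names model, examples, and example_emb are taken from the snippet above and are assumptions about the surrounding code); it handles forward_u returning either a single tensor or a tuple:

```python
out = model.forward_u(examples)
if isinstance(out, tuple):
    example_emb, _ = out  # newer versions: keep the embedding, drop the extra output
else:
    example_emb = out     # older versions: already just the embedding
```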

intellectwood commented 11 months ago

Still can't run; I get this error (see attached screenshot). Have you tried it? I'm also interested in your new version that removes the random walk: is it because the adj mode gives the best results, or because training both models (classifier_model and Randomwalk_Word2vec) improves results only slightly?

ruochiz commented 11 months ago

The latter. Training both models did offer some advantages when we did the benchmarking in the paper. But in later applications of the model to other datasets, I found that training one model is good enough on its own.