Closed: DavideBuffelli closed this issue 2 years ago
This is weird; does it work without the --min_lr parameter? Maybe some deviation between README and code?
--model TopoGNN leads to a model which is an instance of FiltrationGCNModel, and FiltrationGCNModel admits the following parameters:
https://github.com/BorgwardtLab/TOGL/blob/2b47e647ec722dee936f3c7be4c1b74e437d2f07/topognn/models.py#L367
together with the general ones here:
https://github.com/BorgwardtLab/TOGL/blob/2b47e647ec722dee936f3c7be4c1b74e437d2f07/topognn/train_model.py#L105
and the dataset ones:
https://github.com/BorgwardtLab/TOGL/blob/2b47e647ec722dee936f3c7be4c1b74e437d2f07/topognn/data_utils.py#L612
So --depth 3 --lr_patience 25 --min_lr 0.000001 are invalid parameters for TopoGNN.
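For illustration, model-dependent command lines of this kind are typically built in two stages: a base parser holds the shared options, and the chosen model class contributes its own flags before parsing. A minimal self-contained sketch of that pattern (the names here are hypothetical, not the actual TOGL code):

```python
import argparse

def build_parser(model_name):
    # Shared options every model accepts (hypothetical names).
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", default=model_name)
    parser.add_argument("--lr", type=float, default=0.001)
    parser.add_argument("--batch_size", type=int, default=32)
    if model_name == "GCN":
        # Model-specific option: only this model registers --depth.
        parser.add_argument("--depth", type=int, default=3)
    return parser

# The same flag parses for one model but aborts for another:
args = build_parser("GCN").parse_args(["--depth", "4"])
print(args.depth)  # 4
try:
    build_parser("TopoGNN").parse_args(["--depth", "4"])
except SystemExit:
    # argparse exits on unrecognized arguments.
    print("TopoGNN parser rejected --depth")
```

This is why the set of valid flags changes with --model: flags the selected model never registers make argparse abort with "unrecognized arguments".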
Yes, it seems that there is some deviation from the README. Does it work without these parameters?
The code runs without those parameters. I will now try the other commands as well and see if I get correct results. Thank you for your help!
Great! Could you send a brief patch for the README so that I can see what you changed? Then I'll merge it and update everything.
Hi, unfortunately I wasn't able to do much. I'll report below what I did:
I followed the TopoGNN commands in the README and this is what I found:
1 - poetry run topognn/train_model.py --model TopoGNN --dataset ENZYMES --depth 3 --batch_size 20 --lr 0.0007 --lr_patience 25 --min_lr 0.000001
depth, lr_patience, and min_lr are not valid parameters. By removing them, the code works and I get an accuracy of 0.40.
2 - poetry run topognn/train_model.py --model TopoGNN --dataset ENZYMES --fake True --depth 3 --batch_size 20 --lr 0.0007 --lr_patience 25 --min_lr 0.000001
depth, fake, lr_patience, and min_lr are not valid, so if I remove them I end up with the same command as above.
3 - poetry run topognn/train_model.py --model TopoGNN --dataset ENZYMES --depth 3 --batch_size 20 --lr 0.0007 --lr_patience 25 --min_lr 0.000001 --use_node_attributes False
depth, lr_patience, and min_lr are not valid. If I remove them the code runs, but I get an accuracy of 0.13, which I don't think is correct.
4 - poetry run topognn/train_model.py --model TopoGNN --dataset ENZYMES --deepset True --depth 3 --batch_size 20 --lr 0.0007 --lr_patience 25 --min_lr 0.000001
deepset, depth, lr_patience, and min_lr are not valid, so again it would be the same as the first command.
5 - poetry run topognn/train_model.py --model TopoGNN --dataset ENZYMES --fake True --deepset True --depth 3 --batch_size 20 --lr 0.0007 --lr_patience 25 --min_lr 0.000001
deepset, fake, depth, lr_patience, and min_lr are not valid, so again it would be the same as the first command if I remove them.
Could it be that it is necessary to use other models instead of TopoGNN?
I just updated the README and removed some obsolete calls. Between the current version of the paper and the previous submission, there have been quite a few updates, leading to a divergence in descriptions. I'd suggest using DD for testing for now. Maybe @ExpectationMax or @edebrouwer could briefly chime in with updates for the documentation?
Hey @DavideBuffelli, sorry for the issues you are facing. I will give this a more detailed look later today. I suspect we made a mistake when cleaning up the code prior to publication, such that the README and the code are not fully in line.
Generally, the train_model.py script is not the script we used to train the models in the paper. Instead, we relied on a training approach similar to the GNN Benchmarking paper, as indicated in the publication; it is implemented in the train_model_gnn_benchmarks.py script. Can you try the commands denoted in the README using this script?
Hi @ExpectationMax, thank you for your help. Unfortunately the situation doesn't change when using train_model_gnn_benchmarks.py, as the input arguments depend on --model, and TopoGNN does not support many of the arguments that are passed in your commands. Could it be that another model should be used? For example, I was looking at LargerTopoGNN and SimpleTopoGNN, but even those do not support all the arguments that are passed.
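As a stopgap while the README is sorted out, and assuming the scripts use a standard argparse setup (which the linked sources suggest), one can see which flags a given parser rejects without the script aborting by using parse_known_args. A small self-contained sketch with made-up flags:

```python
import argparse

# parse_known_args splits the command line into recognized options
# and leftover unknown tokens, instead of exiting like parse_args.
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float)
parser.add_argument("--batch_size", type=int)

cli = ["--lr", "0.0007", "--batch_size", "20", "--depth", "3"]
known, unknown = parser.parse_known_args(cli)
print(known.lr, known.batch_size)  # 0.0007 20
print(unknown)  # ['--depth', '3']
```

The leftover list shows exactly which README flags the chosen model's parser does not know about, which makes trimming the commands mechanical rather than trial and error.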
Hey Davide, I'm not entirely sure what went wrong in the main repository. I suspect the following: we updated the README in a previous submission, yet continued to work on the codebase to include revisions. The README was then pulled into the main repo, but not the changes that occurred during development, as there were merging issues. I suppose this is something we wanted to fix later and then forgot.
Sorry for the hassle! The code in the branch code_submission_iclr2022 should be functional -- at least I tested this code base prior to the submission of our paper.
We will have a look at the differences between the two branches and then merge them in the next days such that there is only one functioning version.
Again, sorry for this mixup; hope it's smooth sailing from here!
Following up on the merge suggestion: should we go through with that?
Merged the branch and updated dependencies (d0c986cf829ca6bbae1a23e5cdab1c99146503cd).
I tried replicating the results in your paper following your instructions; however, it seems the parameters are not correct. For example, when I run:
poetry run topognn/train_model.py --model TopoGNN --dataset ENZYMES --depth 3 --batch_size 20 --lr 0.0007 --lr_patience 25 --min_lr 0.000001
I get an error; the following are the arguments indicated as valid for the train_model.py script when running the above command: