HennyJie / IBGNN

MICCAI 2022 (Oral): Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis

Training parameters #2

Closed · BobrG closed this issue 1 year ago

BobrG commented 1 year ago

Hi! Thank you for your work! I have a question regarding the model parameters: in the paper you say that the parameters were tuned with an AutoML toolkit. Do I understand correctly that these tuned parameters are assigned as the default argument values in main_explainer.py? I don't have access to the supplementary material of your paper to check this.

Best,

HennyJie commented 1 year ago

Not exactly. If you set `args.enable_nni`, the whole framework can be tuned directly with the NNI framework. What you need is an additional config file that sets the range of each parameter you want to try; the AutoML toolkit can then iterate over their combinations intelligently. You can refer to the NNI documentation (https://github.com/microsoft/nni) for more details on how to set up the config file.
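For reference, here is a minimal sketch of how such an `enable_nni` hook usually looks. This is not the repo's actual code; the flag handling and parameter names are assumptions based on this thread:

```python
# A minimal sketch, assuming an argparse-based entry point like
# main_explainer.py; per trial, NNI samples one combination from the
# search-space config and hands it to the script at runtime.
import argparse
import nni

parser = argparse.ArgumentParser()
parser.add_argument('--enable_nni', action='store_true')
parser.add_argument('--lr', type=float, default=0.001)
parser.add_argument('--hidden_dim', type=int, default=16)
args = parser.parse_args()

if args.enable_nni:
    tuned = nni.get_next_parameter()  # e.g. {'lr': 0.01, 'hidden_dim': 32}
    for name, value in tuned.items():
        setattr(args, name, value)

# ... train with `args`, then report the metric back to the tuner, e.g.:
# nni.report_final_result(best_val_accuracy)
```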

BobrG commented 1 year ago

@HennyJie thank you for the answer! Do I understand correctly that, in order to reproduce your results on, say, PPMI, I need to prepare the dataset according to your description, then run training with `enable_nni`, get the best parameters, and work with those?

BobrG commented 1 year ago

Also, I still think it would be better if you could share either the best parameters of your model or the ranges you used for AutoML tuning; I just want to reproduce some of the results from your paper :)

BobrG commented 1 year ago

Btw, you have a bug in utils/modified_args.py at line 11: because of it, the learning rate will always be set to zero, since it is converted to an int for NNI tuning ^^
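A minimal illustration of the failure mode, assuming the offending line casts every tuned value with `int()`:

```python
# Assuming line 11 applies int() to every tuned value:
int(0.001)  # -> 0, so the learning rate silently becomes zero
```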

DDVD233 commented 1 year ago

Thanks for asking.

To answer your first question: yes, you need to preprocess the dataset; here are detailed instructions for the dataset preprocessing. You do not need to use NNI for it to work, though. Here are the best hyperparameters we used:

```
hidden_dim=16
n_GNN_layers=2
n_MLP_layers=1
lr=0.001
num_heads=1
weight_decay=1e-5
inital_epochs=100
explainer_epochs=100
tuning_epochs=100
node_features=adj
pooling=sum
explain=True
```
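A hypothetical way to launch a run with these values (the flag names are assumed to mirror the list above; check the argparse definitions in main_explainer.py before relying on them):

```python
# A reproduction sketch, not the authors' script: build the command line
# from the reported best values and launch main_explainer.py with it.
import subprocess

best_params = {
    "hidden_dim": 16, "n_GNN_layers": 2, "n_MLP_layers": 1,
    "lr": 0.001, "num_heads": 1, "weight_decay": 1e-5,
    "inital_epochs": 100, "explainer_epochs": 100, "tuning_epochs": 100,
    "node_features": "adj", "pooling": "sum",
}
cmd = ["python", "main_explainer.py", "--explain"]  # '--explain' flag assumed
cmd += [f"--{name}={value}" for name, value in best_params.items()]
subprocess.run(cmd, check=True)
```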

If you wish to use NNI to further tune the model, here is the search space file we used: search_space.json.zip
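For readers who cannot download the zip: NNI search spaces follow a fixed `_type`/`_value` schema. Here is a sketch of what such a file could contain for the parameters above, written as the equivalent Python dict; the ranges are illustrative assumptions, not the authors' actual ones:

```python
# Illustrative only: the real ranges are in the attached search_space.json.
search_space = {
    "lr":           {"_type": "choice", "_value": [1e-4, 1e-3, 1e-2]},
    "hidden_dim":   {"_type": "choice", "_value": [8, 16, 32]},
    "n_GNN_layers": {"_type": "choice", "_value": [1, 2, 3]},
    "weight_decay": {"_type": "choice", "_value": [1e-5, 1e-4]},
}
```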

Thank you for identifying the problem with the NNI implementation. When we initially tested it, all input parameters from NNI were strings, so values such as `"lr": "0.001"` were correctly interpreted as floats. However, the implementation has since changed, and parameters are now passed with their native types (i.e., `"lr": 0.001` instead of `"lr": "0.001"`), which led to this issue. I will update the code to ensure compatibility with the latest version of NNI.
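Until that update lands, here is a sketch of a type-robust patch for utils/modified_args.py. The function name is hypothetical; the idea is to cast based on the argument's existing type rather than assuming strings:

```python
# Works whether NNI passes "0.001" (older versions, strings) or 0.001
# (newer versions, native types); avoids the int() truncation that
# zeroed the learning rate.
def apply_tuned_param(args, name, value):
    current = getattr(args, name)
    if isinstance(current, bool):  # check bool before int: bool subclasses int
        setattr(args, name, str(value).lower() in ("true", "1"))
    elif isinstance(current, int):
        setattr(args, name, int(float(value)))
    elif isinstance(current, float):
        setattr(args, name, float(value))  # keeps lr=0.001 from becoming 0
    else:
        setattr(args, name, value)
```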