jhljx / CTGCN

CTGCN: k-core based Temporal Graph Convolutional Network for Dynamic Graphs (accepted by IEEE TKDE in 2020) https://ieeexplore.ieee.org/document/9240056
MIT License

Missing /CTGCN/data/uci/CTGCN/ctgcn_cores #3

Closed · Nick-Kou closed this issue 3 years ago

Nick-Kou commented 3 years ago

Hi,

After attempting to run the graph embedding example: python3 main.py --config=config/uci.json --task=embedding --method=CTGCN-C, I receive a "no such file or directory" error for the path /CTGCN/data/uci/CTGCN/ctgcn_cores. I noticed this entry is optional in the uci.json config file, but after removing it I get another error: TypeError: expected str, bytes or os.PathLike object, not NoneType, raised at line 57 of helper.py.

I was wondering whether the ctgcn_cores folder is missing from the repository, and whether there is another way around this issue.

Thanks

jhljx commented 3 years ago

You can first run python3 main.py --config=config/uci.json --task=preprocessing --method=CTGCN-C to generate the ctgcn_cores directory and the k-core adjacency matrices. Then you can run with '--task=embedding' to generate the CTGCN node embeddings.
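For reference, the full two-step sequence for the UCI example should look something like this (both commands are the ones quoted above):

```
# 1. Preprocessing: generates data/uci/CTGCN/ctgcn_cores and the k-core adjacency matrices
python3 main.py --config=config/uci.json --task=preprocessing --method=CTGCN-C

# 2. Embedding: learns the CTGCN-C node embeddings from the preprocessed cores
python3 main.py --config=config/uci.json --task=embedding --method=CTGCN-C
```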

Hope this helps you!

Nick-Kou commented 3 years ago

Thank you! My last question: how do you expect the CTGCN-S and CTGCN-C models to perform at learning node embeddings on dynamic, time-evolving attributed graphs with node features? Do you have any suggestions for the hyperparameters? Lastly, how would the feature files be incorporated?

jhljx commented 3 years ago

CTGCN-C focuses more on local connectivity features, while CTGCN-S focuses on structural similarity features. You can test both methods on your specific tasks.

I tested the hyperparameters of the CTGCN method in the paper and found that the model is robust. So if memory is limited, you can reduce the k-core number (this is a hyperparameter) without harming performance. You can also review the parameter sensitivity results in our paper.

If you want to incorporate your own feature files, you need to modify 'train.py' and 'helper.py'. In 'train.py' I have defined a parameter called 'nfeature_folder'; you can save the node features for each timestamp in that folder, and the node features will then be read from it. This folder parameter can be added to the configuration files.

After the above step, you can read the node features through the 'get_feature_list' function in 'helper.py'. I haven't tested this before; I just left the function there for future use, so you can test it for your own purposes.
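If it helps, here is a minimal sketch of saving one feature file per timestamp into such a folder. The folder path, file naming scheme, and CSV layout are only illustrative assumptions; check what 'get_feature_list' in 'helper.py' actually expects before using it.

```python
# Minimal sketch: dump one node-feature file per timestamp into a feature folder.
# Assumptions (not verified against helper.py's get_feature_list): each timestamp's
# features are a dense array of shape (num_nodes, feature_dim), and one CSV file
# per timestamp with node ids as the index is an acceptable on-disk format.
import os
import numpy as np
import pandas as pd

def save_node_features(feature_list, node_ids, nfeature_folder):
    """Save a list of per-timestamp feature matrices, one CSV per timestamp.

    feature_list: list of arrays, each of shape (num_nodes, feature_dim)
    node_ids: node identifiers used as the CSV index
    nfeature_folder: the folder referred to by the 'nfeature_folder' parameter
    """
    os.makedirs(nfeature_folder, exist_ok=True)
    for t, features in enumerate(feature_list):
        df = pd.DataFrame(features, index=node_ids)
        # Hypothetical naming scheme: adjust to match your graph snapshot file names.
        df.to_csv(os.path.join(nfeature_folder, f"features_{t}.csv"))

# Example usage: random 32-dim features for 100 nodes over 5 timestamps.
# feats = [np.random.rand(100, 32) for _ in range(5)]
# save_node_features(feats, list(range(100)), "data/uci/nfeature")
```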

Nick-Kou commented 3 years ago

Thank you so much. I really appreciate the assistance and suggestions.