BUPT-GAMMA / OpenHGNN

This is an open-source toolkit for Heterogeneous Graph Neural Networks (OpenHGNN) based on DGL.

Segmentation fault on GTN with 3 layers #211

Closed: yhchong closed this issue 3 months ago

yhchong commented 6 months ago

🐛 Bug

To Reproduce

Steps to reproduce the behavior:

  1. Change `num_layers` in GTN to 3
  2. Run `python ./OpenHGNN/main.py -m GTN -d acm4GTN -t node_classification -g 0`
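
As an alternative to editing the config, the same override can be passed programmatically. The sketch below assumes the `Experiment` API referred to in the log output accepts the configuration keys shown there (e.g. `num_layers`) as keyword arguments; the parameter names are taken from the printed config, not verified against this OpenHGNN version.

```python
# Minimal sketch, assuming openhgnn.Experiment accepts config keys
# (e.g. num_layers) as keyword overrides; names come from the log below.
from openhgnn import Experiment

experiment = Experiment(
    model='GTN',
    dataset='acm4GTN',
    task='node_classification',
    gpu=0,
    num_layers=3,  # the setting that triggers the reported crash
)
experiment.run()
```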

Namespace(dataset='acm4GTN', gpu=0, load_from_pretrained=False, model='GTN', task='node_classification', use_best_config=False)

Basic setup of this experiment: model: GTN
dataset: acm4GTN
task: node_classification. This experiment has following parameters. You can use set_params to edit them. Use print(experiment) to print this information again.

adaptive_lr_flag: True
dataset_name: acm4GTN
device: cuda:0
gpu: 0
hidden_dim: 128
hpo_search_space: None
hpo_trials: 100
identity: True
load_from_pretrained: False
lr: 0.005
max_epoch: 50
mini_batch_flag: False
model_name: GTN
norm_emd_flag: True
num_channels: 2
num_layers: 3
optimizer: Adam
out_dim: 16
output_dir: ./openhgnn/output/GTN
patience: 10
seed: 0
use_best_config: False
weight_decay: 0.001

10 Jan 00:43 INFO [Config Info] Model: GTN, Task: node_classification, Dataset: acm4GTN
Done saving data into cached files.
10 Jan 00:43 INFO [NC Specific] Modify the out_dim with num_classes
10 Jan 00:43 INFO [Feature Transformation] Feat is 0, nothing to do!
0%| | 0/50 [00:00<?, ?it/s]
Segmentation fault (core dumped)
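
Because the process dies with a hard segmentation fault rather than a Python exception, there is no traceback in the log above. One generic way to see which Python frame the native crash happens under is the standard-library faulthandler (a diagnostic sketch, not OpenHGNN-specific); launching with `python -X faulthandler` has the same effect.

```python
# Sketch: ask CPython to dump the Python stack to stderr when a fatal
# signal such as SIGSEGV is received. Add near the top of main.py.
import faulthandler
faulthandler.enable()
```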

Expected behavior

No segmentation fault.

Environment

hidden_dim = 128
out_dim = 16
num_channels = 2
num_layers = 3
seed = 0
max_epoch = 50
patience = 10
identity = True
norm_emd_flag = True
adaptive_lr_flag = True
mini_batch_flag = False

Ying-1106 commented 3 months ago

We have run GTN with 3 layers on our own machine and it runs normally. We suspect your segmentation fault may be caused by insufficient GPU memory; please check your GPU memory usage. Thanks for using OpenHGNN.
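
For the GPU memory check suggested above, watching `nvidia-smi` during the run is the usual approach; the snippet below is a small sketch using standard PyTorch CUDA utilities (not part of OpenHGNN) to print free versus total memory on the device before training starts.

```python
# Sketch: report free/total memory on GPU 0 using standard PyTorch calls.
import torch

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info(0)
    print(f"GPU 0: {free_bytes / 1e9:.2f} GB free of {total_bytes / 1e9:.2f} GB total")
```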