ZikangZhou / HiVT

[CVPR 2022] HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhou_HiVT_Hierarchical_Vector_Transformer_for_Multi-Agent_Motion_Prediction_CVPR_2022_paper.pdf
Apache License 2.0

GPU memory usage keeps increasing during training. #51

Open yuanryann opened 3 months ago

yuanryann commented 3 months ago

Hi Dr. Zhou, first of all, thank you very much for your excellent work! I am running training on an NVIDIA A4000 GPU, and GPU memory usage grows every epoch. Some hyperparameters were changed: train_batch_size and val_batch_size are set to 64, parallel: true, num_workers: 6, pin_memory: false. At epoch 0 the GPU memory usage is about 11000 MiB, but by epoch 30 it has reached 15577 MiB. Could anyone help me with this issue? [screenshot]
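One way to narrow this down (a generic diagnostic sketch, not code from HiVT) is to log PyTorch's own allocator statistics once per epoch. If torch.cuda.memory_allocated() keeps climbing, tensors are genuinely being retained across epochs; if only the number reported by nvidia-smi grows while allocated memory stays flat, the growth is usually just the CUDA caching allocator holding on to freed blocks.

```python
import torch

def log_gpu_memory(tag: str, device: int = 0) -> None:
    """Print allocated vs. reserved CUDA memory (MiB) and the peak since the last reset."""
    allocated = torch.cuda.memory_allocated(device) / 2**20
    reserved = torch.cuda.memory_reserved(device) / 2**20
    peak = torch.cuda.max_memory_allocated(device) / 2**20
    print(f"[{tag}] allocated={allocated:.0f} MiB, "
          f"reserved={reserved:.0f} MiB, peak={peak:.0f} MiB")

# Call once per epoch, e.g. from a PyTorch Lightning callback:
# def on_train_epoch_end(self, trainer, pl_module):
#     log_gpu_memory(f"epoch {trainer.current_epoch}")
#     torch.cuda.reset_peak_memory_stats()
```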

The hparams.yaml is shown below:

historical_steps: 20
future_steps: 30
num_modes: 6
rotate: true
node_dim: 2
edge_dim: 2
embed_dim: 64
num_heads: 8
dropout: 0.1
num_temporal_layers: 4
num_global_layers: 3
local_radius: 50
parallel: true
lr: 0.0005
weight_decay: 0.0001
T_max: 64
root: /home/com0179/AI/Prediction/HiVT/datasets
train_batch_size: 64
val_batch_size: 64
shuffle: true
num_workers: 6
pin_memory: false
persistent_workers: true
gpus: 1
max_epochs: 64
monitor: val_minFDE
save_top_k: 5
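If the allocated memory (not just the reserved/cached amount) really does grow every epoch, a frequent culprit in PyTorch training code is holding references to loss or metric tensors that still carry their autograd graph, for example when accumulating them for an epoch-level average. The sketch below only illustrates that generic pattern; it is not taken from HiVT.

```python
import torch

losses = []  # e.g. accumulated for an epoch-level average

def record_loss_buggy(loss: torch.Tensor) -> None:
    # BAD: the stored tensor keeps the whole computation graph alive,
    # so GPU memory grows with every step that appends to the list.
    losses.append(loss)

def record_loss_fixed(loss: torch.Tensor) -> None:
    # GOOD: detach (or use loss.item()) so only the value is kept and
    # the graph can be freed after backward().
    losses.append(loss.detach().cpu())
```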

Joseph-Lee-V commented 1 month ago

Hello, I encountered the same issue :) Have you resolved it?