joslefaure / HIT

Official Implementation of our WACV2023 paper: “Holistic Interaction Transformer Network for Action Detection”
https://arxiv.org/abs/2210.12686

Distributed training problem #39

Open ddddqt opened 5 months ago

ddddqt commented 5 months ago

[screenshot attached]

Hello author, I am trying to run distributed training on two GPUs, but the process has been stuck at the point shown above and training never starts. Have you ever encountered this situation?

joslefaure commented 5 months ago

I have not encountered this issue. If you're training on JHMDB, one GPU is enough; it should not take long to train.
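For anyone hitting the same hang: before digging into the training code, it can help to rule out the multi-GPU environment with a standalone NCCL sanity check, and to rerun with `NCCL_DEBUG=INFO` set to see where communication stalls. Below is a minimal sketch (the `ddp_check.py` name and the `torchrun` launch are assumptions for illustration, not part of this repo):

```python
# Minimal two-GPU NCCL sanity check (a standalone sketch, not HIT code).
# Launch with: torchrun --nproc_per_node=2 ddp_check.py
import os

import torch
import torch.distributed as dist


def main():
    # torchrun sets RANK, WORLD_SIZE, MASTER_ADDR/PORT in the environment,
    # which the default env:// init method reads.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # If even this all_reduce hangs, the problem is in the GPU/NCCL setup
    # (driver, peer-to-peer transport, etc.), not in the training code.
    t = torch.ones(1, device="cuda")
    dist.all_reduce(t)
    print(f"rank {dist.get_rank()}: all_reduce ok, value = {t.item()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

If this script also hangs, the issue is environmental rather than specific to HIT; setting `NCCL_P2P_DISABLE=1` is a common workaround when GPU peer-to-peer communication is the culprit.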