Dear author, hello.
Thank you very much for sharing this code.
The problem I am currently facing is that I cannot train with multiple GPUs on a single machine. Since `torch.distributed.launch` has been deprecated, I have tried launching with `python -m torch.distributed.run` and with `torchrun` (the exact commands are shown below), as well as other variants, but training never starts and no relevant log output is produced. Could you please advise me on how to solve this problem? Thank you very much, and I look forward to your reply. Thank you again.
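For reference, these are the commands I ran, reconstructed here as I believe they were intended (the flag names follow the standard `torch.distributed.run`/`torchrun` CLI, and `configs/demo.yaml` is the demo config from this repo):

```bash
# Attempt 1: launch 4 worker processes on a single node
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.run \
    --nnodes 1 --nproc_per_node 4 \
    train.py --config configs/demo.yaml

# Attempt 2: the torchrun entry point
# (note: without --nproc_per_node, torchrun defaults to 1 process)
torchrun train.py --config configs/demo.yaml
```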