hpcaitech / ColossalAI-Examples

Examples of training models with hybrid parallelism using ColossalAI
Apache License 2.0

An error occurred during multi-node distributed training #180

Open ShangWeiKuo opened 1 year ago

ShangWeiKuo commented 1 year ago

🐛 Describe the bug

When I run the command `colossalai run --nproc_per_node 4 --host [host1 ip addr],[host2 ip addr] --master_addr [host1 ip addr] train.py`, I get the following message:

`Error: failed to run torchrun --nproc_per_node=4 --nnodes=2 --node_rank=1 --rdzv_backend=c10d --rdzv_endpoint=[host1 ip addr]:29500 --rdzv_id=colossalai-default-job train.py on [host2 ip addr]`

What configuration do I have to set in the train.py script you provide?
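
For context, the example training scripts in this repo generally read all distributed settings from the environment variables that `colossalai run`/`torchrun` set, rather than from anything hard-coded in `train.py`. A minimal sketch of that startup boilerplate, assuming the `launch_from_torch` API from the ColossalAI versions of this era (the actual example script may differ):

```python
# Minimal sketch of the usual ColossalAI startup boilerplate, not the
# exact contents of this repo's train.py.
import colossalai
import torch
from colossalai.logging import get_dist_logger


def main():
    # `colossalai run` invokes torchrun on each node, and torchrun sets
    # RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT in the
    # environment. launch_from_torch reads them, so the script itself
    # needs no per-node configuration.
    colossalai.launch_from_torch(config={})  # config may also be a path to a config.py

    logger = get_dist_logger()
    logger.info(f'launched on cuda:{torch.cuda.current_device()}', ranks=[0])


if __name__ == '__main__':
    main()
```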

Environment

CUDA Version: 11.3
PyTorch Version: 1.12.0
CUDA Version in PyTorch Build: 11.3
PyTorch CUDA Version Match: ✓
CUDA Extension: ✓

FrankLeeeee commented 1 year ago

Hi, are these two machines able to connect to each other via SSH without a password?
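
For context, `colossalai run` connects to each address listed in `--host` over SSH to start `torchrun` there, so the launching node needs passwordless SSH access to every other host. A minimal sketch of the usual key-based setup, where `user` and the bracketed addresses are placeholders for your own values:

```bash
# Run on host1; accept the defaults and leave the passphrase empty.
ssh-keygen -t rsa
# Install host1's public key on host2.
ssh-copy-id user@[host2 ip addr]
# Should print host2's hostname without asking for a password.
ssh user@[host2 ip addr] hostname
```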