motional / nuplan-devkit

The devkit of the nuPlan dataset.
https://www.nuplan.org

The process gets stuck when running on a two-node GPU cluster #318

Closed motianxiuhua closed 1 year ago

motianxiuhua commented 1 year ago

I tried to run the program on a two-node GPU cluster, but the process got stuck at the beginning of the first epoch, as follows:

Global seed set to 0
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/4
2023-05-25 12:59:01,172 INFO {/home/user002/miniconda3/envs/nuplan/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py:194}Added key: store_based_barrier_key:1 to store for rank: 1
2023-05-25 12:59:01,177 INFO {/home/user002/miniconda3/envs/nuplan/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py:194}Added key: store_based_barrier_key:1 to store for rank: 0
2023-05-25 12:59:01,183 INFO {/home/user002/miniconda3/envs/nuplan/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py:224}Rank 0: Completed store-based barrier for 4 nodes.
2023-05-25 12:59:01,186 INFO {/home/user002/miniconda3/envs/nuplan/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py:224}Rank 1: Completed store-based barrier for 4 nodes.
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All DDP processes registered. Starting ddp with 4 processes
----------------------------------------------------------------------------------------------------

user-X299-UD4-Pro:3985506:3985506 [0] NCCL INFO Bootstrap : Using [0]enp0s31f6:114.214.170.120<0>
user-X299-UD4-Pro:3985506:3985506 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
user-X299-UD4-Pro:3985506:3985506 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
user-X299-UD4-Pro:3985506:3985506 [0] NCCL INFO NET/Socket : Using [0]enp0s31f6:114.214.170.120<0>
user-X299-UD4-Pro:3985506:3985506 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda11.1
user-X299-UD4-Pro:3993535:3993535 [1] NCCL INFO Bootstrap : Using [0]enp0s31f6:114.214.170.120<0>
user-X299-UD4-Pro:3993535:3993535 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
user-X299-UD4-Pro:3993535:3993535 [1] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
user-X299-UD4-Pro:3993535:3993535 [1] NCCL INFO NET/Socket : Using [0]enp0s31f6:114.214.170.120<0>
user-X299-UD4-Pro:3993535:3993535 [1] NCCL INFO Using network Socket
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Channel 00/02 :    0   1   2   3
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Channel 01/02 :    0   1   2   3
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Trees [0] 2/-1/-1->1->0|0->1->2/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Setting affinity for GPU 1 to 0fffffff
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->3|3->0->1/-1/-1
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Setting affinity for GPU 0 to 0fffffff
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Could not enable P2P between dev 1(=65000) and dev 0(=17000)
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Channel 00 : 3[65000] -> 0[17000] [receive] via NET/Socket/0
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Could not enable P2P between dev 0(=17000) and dev 1(=65000)
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Channel 00 : 0[17000] -> 1[65000] via direct shared memory
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Could not enable P2P between dev 0(=17000) and dev 1(=65000)
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Channel 00 : 1[65000] -> 2[17000] [send] via NET/Socket/0
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Channel 00 : 2[17000] -> 1[65000] [receive] via NET/Socket/0
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Could not enable P2P between dev 1(=65000) and dev 0(=17000)
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Channel 00 : 1[65000] -> 0[17000] via direct shared memory
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Channel 01 : 3[65000] -> 0[17000] [receive] via NET/Socket/0
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Could not enable P2P between dev 0(=17000) and dev 1(=65000)
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Channel 01 : 0[17000] -> 1[65000] via direct shared memory
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Could not enable P2P between dev 1(=65000) and dev 0(=17000)
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Could not enable P2P between dev 0(=17000) and dev 1(=65000)
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Channel 01 : 1[65000] -> 2[17000] [send] via NET/Socket/0
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Could not enable P2P between dev 1(=65000) and dev 0(=17000)
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO Channel 01 : 1[65000] -> 0[17000] via direct shared memory
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
user-X299-UD4-Pro:3993535:4024417 [1] NCCL INFO comm 0x7f127000db60 rank 1 nranks 4 cudaDev 1 busId 65000 - Init COMPLETE
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO Channel 01 : 0[17000] -> 3[65000] [send] via NET/Socket/0
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
user-X299-UD4-Pro:3985506:4024404 [0] NCCL INFO comm 0x7f8930012c90 rank 0 nranks 4 cudaDev 0 busId 17000 - Init COMPLETE
user-X299-UD4-Pro:3985506:3985506 [0] NCCL INFO Launch mode Parallel
2023-05-25 12:59:04,500 INFO {/home/user002/code/nuplan-devkit-v1.1-ours/nuplan/planning/training/data_loader/datamodule.py:47}Number of samples in train set: 66
2023-05-25 12:59:04,499 INFO {/home/user002/code/nuplan-devkit-v1.1-ours/nuplan/planning/training/data_loader/datamodule.py:47}Number of samples in train set: 66

2023-05-25 12:59:04,501 INFO {/home/user002/code/nuplan-devkit-v1.1-ours/nuplan/planning/training/data_loader/datamodule.py:47}Number of samples in validation set: 17
2023-05-25 12:59:04,502 INFO {/home/user002/code/nuplan-devkit-v1.1-ours/nuplan/planning/training/data_loader/datamodule.py:47}Number of samples in validation set: 17
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
2023-05-25 12:59:04,535 INFO {/home/user002/code/nuplan-devkit-v1.1-ours/nuplan/planning/training/modeling/lightning_module_wrapper.py:196}Using optimizer: torch.optim.Adam
2023-05-25 12:59:04,535 INFO {/home/user002/code/nuplan-devkit-v1.1-ours/nuplan/planning/training/modeling/lightning_module_wrapper.py:196}Using optimizer: torch.optim.Adam
2023-05-25 12:59:04,536 INFO {/home/user002/code/nuplan-devkit-v1.1-ours/nuplan/planning/script/builders/lr_scheduler_builder.py:52}Not using any lr_schedulers.
2023-05-25 12:59:04,537 INFO {/home/user002/code/nuplan-devkit-v1.1-ours/nuplan/planning/script/builders/lr_scheduler_builder.py:52}Not using any lr_schedulers.

  | Name  | Type    | Params
----------------------------------
0 | model | LaneGCN | 2.0 M 
----------------------------------
2.0 M     Trainable params
0         Non-trainable params
2.0 M     Total params
8.196     Total estimated model params size (MB)
/home/user002/miniconda3/envs/nuplan/lib/python3.9/site-packages/pytorch_lightning/callbacks/lr_monitor.py:97: RuntimeWarning: You are using `LearningRateMonitor` callback with models that have no learning rate schedulers. Please see documentation for `configure_optimizers` method.
  rank_zero_warn(
Epoch 0:   0%|                                                                                           | 0/10 [00:00<?, ?it/s]
Epoch 0:  10%|██████▌                                                          | 1/10 [00:56<08:31, 56.88s/it, loss=210, v_num=]
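
To check whether the hang is in NCCL itself rather than in the devkit, a minimal all-reduce smoke test can be run across both nodes. This is a sketch of my own, not devkit code: it reuses the MASTER_ADDR/MASTER_PORT exports below and assumes RANK, WORLD_SIZE, and LOCAL_RANK are set per process (RANK=0..3, WORLD_SIZE=4, LOCAL_RANK=0..1).

# Launch one copy per GPU on each node. If this also hangs, the problem
# is the NCCL/network setup, not nuPlan.
RANK=0 LOCAL_RANK=0 WORLD_SIZE=4 python -c '
import os, torch, torch.distributed as dist
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
dist.init_process_group("nccl")  # env:// rendezvous: reads MASTER_ADDR/PORT, RANK, WORLD_SIZE
t = torch.ones(1, device="cuda")
dist.all_reduce(t)  # hangs here if cross-node NCCL traffic is broken
print("rank", dist.get_rank(), "all_reduce ok:", t.item())
dist.destroy_process_group()
'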

On the master node, the environment variables are configured as follows:

export NUPLAN_DATA_ROOT="/home/user001/Code/nuplan-devkit-v1.1-ours/nuplan/dataset"
export NUPLAN_MAPS_ROOT="/home/user001/Code/nuplan-devkit-v1.1-ours/nuplan/dataset/maps"
export NUPLAN_EXP_ROOT="/home/user002/code/nuplan-devkit-v1.1-ours/nuplan/exp"
export NUPLAN_DB_FILES="/home/user001/Code/nuplan-devkit-v1.1-ours/nuplan/dataset/nuplan-v1.1/mini"
export NUPLAN_MAP_VERSION="nuplan-maps-v1.0"
export ip_head="114.214.170.120"
export redis_password=""
export num_nodes="2"
export MASTER_ADDR="114.214.170.120"
export MASTER_PORT="1234"
export NODE_RANK="0"
export NUM_NODES="2"
export CUDA_VISIBLE_DEVICES="0,1"
export NCCL_DEBUG=INFO
export NCCL_SOCKET_IFNAME=enp
export NCCL_IB_DISABLE=1

On the slave node, the environment variables are configured as follows:

export NUPLAN_DATA_ROOT="/home/user001/Code/nuplan-devkit-v1.1-ours/nuplan/dataset"
export NUPLAN_MAPS_ROOT="/home/user001/Code/nuplan-devkit-v1.1-ours/nuplan/dataset/maps"
export NUPLAN_EXP_ROOT="/home/user002/code/nuplan-devkit-nuplan-devkit-v1.1/nuplan/exp"
export NUPLAN_DB_FILES="/home/user001/Code/nuplan-devkit-v1.1-ours/nuplan/dataset/nuplan-v1.1/mini"
export NUPLAN_MAP_VERSION="nuplan-maps-v1.0"
export MASTER_ADDR="114.214.170.120"
export MASTER_PORT="1234"
export NODE_RANK="1"
export NUM_NODES="2"
export CUDA_VISIBLE_DEVICES="0,1"
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=ALL
export NCCL_SOCKET_IFNAME=enp
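
As a sanity check before launching, both nodes should agree on MASTER_ADDR, MASTER_PORT, and NUM_NODES and differ only in NODE_RANK; a quick way to compare the settings (just a grep, not devkit tooling) is:

# Run on each node: the output should be identical except for NODE_RANK.
env | grep -E '^(MASTER_ADDR|MASTER_PORT|NODE_RANK|NUM_NODES|CUDA_VISIBLE_DEVICES|NCCL_)'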

Both nodes change two parameters in the YAML file default_lightning.yaml:

gpus: 2
num_nodes: 2 

The slave node also needs to change one more parameter in the YAML file ray_distributed.yaml:

master_node_ip: "114.214.170.120"
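
These two lightning values can presumably also be passed as Hydra overrides instead of editing the YAML, following the same lightning.trainer.params.* path that the training command below already uses for max_epochs (an assumption on my part, not something I verified):

python nuplan/planning/script/run_training.py \
    py_func=train \
    +training=training_vector_model \
    lightning.trainer.params.gpus=2 \
    lightning.trainer.params.num_nodes=2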

Before running the command, we need to start Ray on both nodes, and then run the program on both nodes as follows:

python nuplan/planning/script/run_training.py \
    py_func=train \
    +training=training_vector_model \
    scenario_builder=nuplan_mini \
    scenario_filter.limit_total_scenarios=200 \
    lightning.trainer.params.max_epochs=1 \
    data_loader.params.batch_size=2 \
    data_loader.params.num_workers=4 

I am so sorry to bother you, but I really need your help. Thank you very much!

patk-motional commented 1 year ago

Hi @motianxiuhua,

Please refer to this comment that I made on a similar issue: https://github.com/motional/nuplan-devkit/issues/253#issuecomment-1521335641. Let me know if you are still facing problems afterwards.

motianxiuhua commented 1 year ago

Thanks! I closed Ray, and it works! But loading scenarios is slow.

patk-motional commented 1 year ago

You can try a different worker: worker=single_machine_thread_pool
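
For example, appended to the training command above with the other overrides unchanged:

python nuplan/planning/script/run_training.py \
    py_func=train \
    +training=training_vector_model \
    scenario_builder=nuplan_mini \
    scenario_filter.limit_total_scenarios=200 \
    lightning.trainer.params.max_epochs=1 \
    data_loader.params.batch_size=2 \
    data_loader.params.num_workers=4 \
    worker=single_machine_thread_pool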