google-research / smore

Apache License 2.0

Wikikgv2 Model Training is Hanging #11

Closed. HarryShomer closed this issue 1 year ago.

HarryShomer commented 1 year ago

Hi,

I'm having an issue where the model gets stuck while training. It typically happens early in the first epoch. Below is an example from running smore/training/vec_scripts/train_shallow_wikikgv2.sh (unmodified except for the GPU settings) on 4 NVIDIA RTX A6000 48GB GPUs.

[Screenshot: model_stuck]
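
For reference, the run was launched roughly like this, with the GPU selection being the only change from the stock setup (the CUDA_VISIBLE_DEVICES line is purely illustrative; the actual change may live inside the script):

```
# Illustration only: the GPU selection is the one thing changed from the stock setup.
export CUDA_VISIBLE_DEVICES=0,1,2,3   # the 4 RTX A6000s
bash smore/training/vec_scripts/train_shallow_wikikgv2.sh
```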

It hangs forever unless I stop it with a keyboard interrupt. Doing so yields the following traceback (I've only posted a portion because it's very long and repetitive).

[Screenshot: model_traceback]

It seems like something is going wrong in the multiprocessing, as it hangs while passing messages between processes.
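
In case it helps with debugging, the stacks of the stuck processes can be inspected while the run hangs with something like this (assuming py-spy is installed; the PID is a placeholder for one of the hung workers):

```
# Sketch for inspecting a hung worker; py-spy is a separate install.
pip install py-spy
py-spy dump --pid <worker_pid>   # prints the current Python stack of that process
```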

Any help would be appreciated! @hyren

Thanks, Harry

hyren commented 1 year ago

Hi, this does not seem to be related to the smore repo. Here is a guide I found; can you please check it? https://www.rdmamojo.com/2012/05/18/libibverbs/
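
If it helps, on Ubuntu the gist of that guide is roughly the following (package names are a best guess for 20.04 and may differ on your setup):

```
# Rough sketch of setting up libibverbs on Ubuntu; adjust package names as needed.
sudo apt-get install libibverbs1 ibverbs-utils
ibv_devinfo   # should list RDMA devices once the verbs stack is in place
```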

HarryShomer commented 1 year ago

That doesn't seem to be the issue.

I tried setting up libibverbs, and while the warnings went away, the code still hangs.

[Screenshot: smore_hang]

hyren commented 1 year ago

Hi, can you share more details about your environment?

HarryShomer commented 1 year ago

Sure. Here's some basic info. Let me know if you would like anything else.

OS: Ubuntu 20.04.5 LTS
CUDA: 11.6.124
GPU(s): NVIDIA RTX A6000
Python: 3.9.12
PyTorch: 1.12.1
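
For anyone else reporting the same thing, these details can be collected with standard commands like:

```
# Standard commands for collecting the environment details above; nothing smore-specific.
lsb_release -d                          # OS
nvidia-smi                              # GPU model and driver
nvcc --version                          # CUDA toolkit
python --version                        # Python
python -c "import torch; print(torch.__version__, torch.version.cuda)"   # PyTorch
```
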
Hanjun-Dai commented 1 year ago

Hi there, sorry for the inconvenience. Could you please quickly try with only 1 GPU and see if it still hangs? We are trying to figure out whether it is due to GPU-GPU communication or the C++ sampler we use.
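
Something along these lines should be enough to force a single-GPU run (assuming the script picks up CUDA_VISIBLE_DEVICES; if it has its own GPU flag, adjusting that instead works too):

```
# Sketch of a single-GPU run, to isolate GPU-GPU communication from the C++ sampler.
export CUDA_VISIBLE_DEVICES=0
bash smore/training/vec_scripts/train_shallow_wikikgv2.sh
```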

HarryShomer commented 1 year ago

Sorry, I should have been clearer in my initial comment. The code doesn't hang when running with just one GPU. This only occurs when using multiple GPUs.

Juanhui28 commented 1 year ago

We tried one GPU again, and actually it still hangs. Sorry for the inconvenience.

Hanjun-Dai commented 1 year ago

Hi there,

Could you please kindly pull the latest code in the wikikgv2 branch, add --train_async_rw=False to your script, and try again? Since I'm unable to reproduce your issue, I'd like to see whether this temporarily resolves it.
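
Concretely, something like the following (the training command inside the script is only a placeholder here; the one real change is the new flag):

```
# Pull the latest wikikgv2 branch, then append the new flag to the training command.
git fetch origin
git checkout wikikgv2
git pull origin wikikgv2

# Then, inside smore/training/vec_scripts/train_shallow_wikikgv2.sh:
#   <existing training command and flags> --train_async_rw=False
```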

Juanhui28 commented 1 year ago

Hi,

We tried your suggestion and now the training doesn't hang! But we got a similar issue during evaluation. We noticed your suggestion in another issue and followed it, but it still hangs.

I really appreciate your help!

hyren commented 1 year ago

hi, what's the script you used?

Juanhui28 commented 1 year ago

Hi, train_shallow_wikikgv2.sh in the training/vec_scripts folder.

hyren commented 1 year ago

Just to make sure, you used the script train_shallow_wikikgv2.sh and added the --train_async_rw=False flag?

Juanhui28 commented 1 year ago

Actually no, since when we add it we get an unrecognized arguments error.

hyren commented 1 year ago

have you pulled the most recent commits?
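
A quick way to check is to compare the local checkout against the remote branch, e.g.:

```
# Verify the local wikikgv2 checkout matches the remote tip.
git fetch origin
git log -1 --oneline HEAD
git log -1 --oneline origin/wikikgv2   # the two commits should match
```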

Juanhui28 commented 1 year ago

Sorry, we made some mistakes when we merged the code. Now we've pulled the latest code and it works for both training and evaluation!

Thank you so much!