pkuzqh / Recoder


RuntimeError: CUDA out of memory #11

Open · happygirlzt opened this issue 2 years ago

happygirlzt commented 2 years ago

Hi there, thank you very much for open-sourcing the work! I wonder what devices you used for this work. I tried to run the training on a machine with 8 Tesla V100-SXM2-16GB GPUs, but it runs out of memory. Besides, I found that the code only utilizes 2 GPUs, although I did not specify that. I modified the device setting inside run.py, but that still did not change the fact that only 2 GPUs are used. Please kindly advise. Thank you in advance!

[Screenshot 2022-08-01 at 15:36:30: CUDA out-of-memory traceback during training]
happygirlzt commented 2 years ago

Hi @pkuzqh , I've got another issue when running the code.

[Screenshot 2022-08-02 at 21:41:50: error traceback]
pkuzqh commented 2 years ago

If you want to change the batch size, you need to change the number in the "args" dict. If you want to use multiple GPUs, you need to modify "model = nn.DataParallel(model, device_ids=[0, 1])".
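A minimal sketch of these two changes, assuming the "args" dict and the nn.DataParallel call sit in run.py as described above (the key name "batchsize" and the surrounding code are assumptions, not the repo's exact source):

```python
import torch
import torch.nn as nn

# Stand-in for the Recoder model built in run.py; the real model is defined elsewhere.
model = nn.Linear(512, 512)

# Hypothetical hyperparameter dict mirroring the "args" dict mentioned above;
# the exact key name is an assumption, check run.py for the real one.
args = {"batchsize": 16}

# The repo reportedly hard-codes two GPUs:
#   model = nn.DataParallel(model, device_ids=[0, 1])
# To use every visible GPU instead, derive device_ids from the device count:
if torch.cuda.is_available():
    device_ids = list(range(torch.cuda.device_count()))
    model = nn.DataParallel(model.cuda(), device_ids=device_ids)
```

Lowering the batch size in "args" and/or listing more device_ids is what resolves the out-of-memory error later in this thread.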

happygirlzt commented 2 years ago

Hi @pkuzqh, thank you for the reply. The CUDA out-of-memory issue has been resolved. However, I ran into the new error shown above. Please kindly advise, thanks.

pkuzqh commented 2 years ago

How many GPUs are you using, and what is the batch size?

happygirlzt commented 2 years ago

Three. In train() I set device_ids=[1, 2, 3], and the batch size is 16.

pkuzqh commented 2 years ago

You need to change the number "4" in lines 103-106 to a multiple of 3, and the batch size also needs to be a multiple of 3.
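For context, nn.DataParallel scatters each batch along dimension 0 across the listed GPUs, so the chunks only come out even when the batch size is divisible by the number of devices. A rough sanity check, using just the numbers from this thread:

```python
device_ids = [1, 2, 3]   # the three GPUs used above
batch_size = 15          # nearest multiple of len(device_ids) below the original 16

# DataParallel splits each input batch along dim 0 across device_ids, so an
# uneven split (e.g. 16 over 3 GPUs) can break code that assumes equal chunks.
assert batch_size % len(device_ids) == 0, "batch size must be a multiple of the GPU count"
per_gpu = batch_size // len(device_ids)  # 5 samples per GPU per step
print(f"{per_gpu} samples per GPU per step")
```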

happygirlzt commented 2 years ago

OK, thank you very much @pkuzqh! It runs now. However, I saw that in train() the number of epochs is 100000 (for epoch in range(100000):). Is that intended?

happygirlzt commented 2 years ago

BTW, for inference, it looks like testDefect4j.py can only use 1 GPU? I have 4 GPUs, but only one was used, and it caused an OOM error.

[Screenshot 2022-08-08 at 12:40:19: out-of-memory error during inference]
pkuzqh commented 2 years ago

You can use "nn.DataParallel" to use multiple GPUs in testDefect4J.py.
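A minimal sketch of that suggestion, with a placeholder model standing in for the one testDefect4J.py actually loads (the surrounding code here is an assumption, not the repo's source):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder for the trained model that testDefect4J.py loads from a checkpoint.
model = nn.Linear(512, 512).to(device)

# Wrap the model so each inference batch is scattered across all visible GPUs.
# Note: this only helps when the per-call batch is larger than 1; an OOM with
# batch size 1 may instead call for a smaller beam size or shorter inputs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

model.eval()
with torch.no_grad():
    dummy_batch = torch.randn(8, 512, device=device)  # stand-in for real test inputs
    output = model(dummy_batch)
```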