diptamath / Nonhomogeneous_Image_Dehazing

Fast Deep Multi-patch Hierarchical Network for Nonhomogeneous Image Dehazing
MIT License
34 stars · 7 forks

Problem with CUDA Memory needed #4

Open AMBBe opened 3 years ago

AMBBe commented 3 years ago

I have tried again and again to run this code on Google Colab with 15 GB of GPU memory, but I can't get it to work. I tried reducing the batch size and used the code below, but it did not help:

```python
import gc
gc.collect()
torch.cuda.empty_cache()
```

and I get this error after running it:

```
init data folders
load encoder_lv1 success
load encoder_lv2 success
load encoder_lv3 success
load encoder_lv1 success
load decoder_lv2 success
load decoder_lv3 success
/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py:154: UserWarning: The epoch parameter in scheduler.step() was not necessary and is being deprecated where possible. Please use scheduler.step() to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
  warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
Training...
Traceback (most recent call last):
  File "DMPHN_train.py", line 262, in <module>
    main()
  File "DMPHN_train.py", line 157, in main
    feature_lv3_3 = encoder_lv3(images_lv3_3)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/My Drive/Nonhomogeneous_Image_Dehazing-master/models.py", line 48, in forward
    x = self.layer2(x) + x
RuntimeError: CUDA out of memory. Tried to allocate 470.00 MiB (GPU 0; 14.76 GiB total capacity; 13.60 GiB already allocated; 23.75 MiB free; 13.68 GiB reserved in total by PyTorch)
```

Please help me
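For context, `gc.collect()` and `torch.cuda.empty_cache()` only return memory that nothing references anymore; they cannot free tensors that are still held by live Python variables (e.g. a stored loss that keeps its whole computation graph alive). A minimal sketch of how these calls are typically combined with dropping such references inside a training loop (the helper name `free_cuda_memory` and the commented loop are illustrative, not from `DMPHN_train.py`):

```python
import gc
import torch

def free_cuda_memory():
    # Release unreferenced Python objects first, then ask PyTorch to
    # return its cached, unused CUDA blocks. Neither call can free
    # memory still held by live tensors.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

# Typical placement inside a training loop (sketch):
# for batch in loader:
#     loss = criterion(model(batch), target)
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad(set_to_none=True)  # drop gradient tensors
#     del loss                               # drop the graph reference
# free_cuda_memory()
```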

AMBBe commented 3 years ago

I get this error after running DMPHN_train.py.

saikatdutta commented 3 years ago

What is the lowest batch size that you've tried with?

AMBBe commented 3 years ago

I tried batch sizes of 4 and 2.

AMBBe commented 3 years ago

Did you run DMPHN_train.py on Colab without this error? I tried it with batch sizes of 2, 4, 8, and 16, but it does not work. Please help me.

AMBBe commented 3 years ago

I use the NH-HAZE dataset for training. Do you use it or another dataset?

saikatdutta commented 3 years ago

Training is done on NH-Haze 2020. Please close this issue if resolved.

shimaamohammed121 commented 3 years ago

I have the same error when I run DMPHN_train.py on Colab. Can you help me solve it?

AMBBe commented 3 years ago

You can run the code without the custom_loss function and use MSE instead; this alternative is commented out in the main file. Use a batch size of less than 32, and run the code without the test section.
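If even small batches run out of memory, gradient accumulation can keep per-step memory low while emulating a larger effective batch. A minimal sketch assuming a standard PyTorch training loop; `train_step_accumulate` and the `(hazy, clear)` batch format are illustrative placeholders, not names from this repository:

```python
import torch
import torch.nn as nn

# Stand-in for custom_loss, as suggested above.
criterion = nn.MSELoss()

def train_step_accumulate(model, optimizer, batches, accum_steps=4):
    """Accumulate gradients over `accum_steps` micro-batches so that
    the effective batch size is larger than what fits in GPU memory
    for a single forward/backward pass."""
    optimizer.zero_grad(set_to_none=True)
    total = 0.0
    for i, (hazy, clear) in enumerate(batches):
        pred = model(hazy)
        # Scale the loss so accumulated gradients average, not sum.
        loss = criterion(pred, clear) / accum_steps
        loss.backward()
        total += loss.item()
        if (i + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad(set_to_none=True)
    return total
```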

shimaamohammed121 commented 3 years ago

I made this modification and still get the same error. I also run the code on Google Colab, @AMBBe.

Aomrlyliliya commented 2 years ago

Did you solve this problem? I have the same problem.

AMBBe commented 2 years ago

Hi, yes. I solved this problem by using Google Colab to run the code.

Aomrlyliliya commented 2 years ago

Thank you for your reply. I ran the code in Google Colab but got the same CUDA out-of-memory error. I also ran the code without the custom_loss function, used MSE instead, used a batch size of less than 32, and ran the code without the test section as you said, but it doesn't work. Can you tell me your settings?