AMBBe opened this issue 3 years ago
I get this error after running DMPHN_train.py.
What is the lowest batch size that you've tried with?
I tried batch sizes of 4 and 2.
Did you run DMPHN_train.py on Colab without this error? I tried it with batch sizes of 2, 4, 8, and 16, but it doesn't work. Please help me.
I use the NH-HAZE dataset for training. Do you use it or another dataset?
Training is done on NH-Haze 2020. Please close this issue if resolved.
I get the same error when I run DMPHN_train.py on Colab. Can you help me solve it?
You can run the code without the custom_loss function and use MSE instead (that code is commented out in the main file), and use a batch size smaller than 32. You should also run the code without the test section.
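For anyone following along, here is a minimal sketch of what that swap might look like. The tensor names `dehazed` and `gt` are placeholders, not the repository's actual variables; this only illustrates using `nn.MSELoss` in place of the custom loss.

```python
import torch
import torch.nn as nn

# Placeholder tensors standing in for the network output and the ground
# truth; the real variable names in DMPHN_train.py may differ.
dehazed = torch.rand(2, 3, 64, 64, requires_grad=True)
gt = torch.rand(2, 3, 64, 64)

mse = nn.MSELoss()
loss = mse(dehazed, gt)   # used in place of the custom_loss function
loss.backward()           # gradients flow just as with the custom loss
```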
I made these modifications and I still get the same error, and I also run the code on Google Colab, @AMBBe.
Did you solve this problem? I have the same problem as you.
Hi, yes, I was able to solve this problem by running the code on Google Colab.
Thank you for your reply. I ran the code on Google Colab but got the same CUDA out-of-memory error. I also ran the code without the custom_loss function, used MSE instead, and used a batch size of less than 32, and I ran it without the test section as you said, but it still doesn't work. Can you tell me your settings?
I tried again and again to run this code on Google Colab with 15 GB of GPU memory, but I couldn't succeed. I tried reducing the batch size and used the code below, but it didn't work:

import gc
gc.collect()
torch.cuda.empty_cache()

And I get this error after running it:
init data folders
load encoder_lv1 success
load encoder_lv2 success
load encoder_lv3 success
load encoder_lv1 success
load decoder_lv2 success
load decoder_lv3 success
/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  UserWarning)
/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py:154: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose
  warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
Training...
Traceback (most recent call last):
  File "DMPHN_train.py", line 262, in <module>
    main()
  File "DMPHN_train.py", line 157, in main
    feature_lv3_3 = encoder_lv3(images_lv3_3)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/My Drive/Nonhomogeneous_Image_Dehazing-master/models.py", line 48, in forward
    x = self.layer2(x) + x
RuntimeError: CUDA out of memory. Tried to allocate 470.00 MiB (GPU 0; 14.76 GiB total capacity; 13.60 GiB already allocated; 23.75 MiB free; 13.68 GiB reserved in total by PyTorch)

Please help me.