Masterythepua opened this issue 1 week ago
It seems that your GPU memory is limited. You can try modifying line 217 in eval.py by changing split_input = cat_input.split(8) to split_input = cat_input.split(1). Please note that this adjustment will reduce the inference speed.
Later, I’ll add this as an argument, making the code more adaptable for smaller GPUs.
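For reference, a minimal sketch of what exposing that value as a command-line flag could look like (the --split_size name and the surrounding stand-in code are hypothetical, not the repository's actual implementation):

```python
# Hypothetical sketch: expose the hard-coded split size in eval.py as a CLI flag,
# so small GPUs can trade inference speed for memory without editing the source.
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument('--split_size', type=int, default=8,
                    help='chunks processed per forward pass; use 1 on ~6 GB GPUs')
args, _ = parser.parse_known_args()

cat_input = torch.randn(8, 6, 256, 256)           # stand-in for the real concatenated input
split_input = cat_input.split(args.split_size)    # was: cat_input.split(8)
outputs = [chunk * 1.0 for chunk in split_input]  # stand-in for the model forward pass
result = torch.cat(outputs, dim=0)
```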
If you found our repository useful, please consider giving it a star. Thank you!
Thank you! It worked, and I am impressed by the results! My GPU has 6 GB of memory. Moreover, the same error appears when the input image is larger (e.g. 2500x2500 or 1000x1000) than the dataset's images (600x400), so an adaptation for smaller GPUs would be great!
Finally, when I tested an image with random dimensions 1238x758, the following error appeared:
eval.py", line 216, in <module> cat_input = torch.cat([input_expended, one_pred_conds], dim=1) RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 1344 but got size 1280 for tensor number 1 in the list.
I have just tested an image with dimensions 1238x758 and did not encounter the issue. Please ensure that your eval.py file is updated to the latest version. For higher-resolution images, a GPU with at least 12GB of memory may be required.
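A common cause of this kind of size mismatch (my assumption here, not the project's documented fix) is a U-Net-style network that downsamples by a fixed factor: inputs whose height or width is not a multiple of that factor must be padded before inference and cropped back afterwards. A generic, self-contained sketch:

```python
# Generic sketch, not this repository's actual fix: pad an image so both spatial
# dimensions are multiples of a fixed factor before running the network,
# then crop the output back to the original size.
import torch
import torch.nn.functional as F

def pad_to_multiple(x, factor=64):
    """Reflect-pad an NCHW tensor so H and W become multiples of `factor`."""
    _, _, h, w = x.shape
    pad_h = (factor - h % factor) % factor
    pad_w = (factor - w % factor) % factor
    return F.pad(x, (0, pad_w, 0, pad_h), mode='reflect'), (h, w)

img = torch.randn(1, 3, 758, 1238)        # arbitrary resolution, as in the report above
padded, (h, w) = pad_to_multiple(img)     # -> shape 1x3x768x1280
# output = model(padded)[..., :h, :w]     # crop the prediction back to 758x1238
```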
I think I used an older version. Thank you for pointing it out!
Hello, congratulations on your work! I have tried to use the provided pre-trained model with the first command in the Low-Light Image Enhancement section.
python3 Enhancement/eval.py --opt experiments/CG_UNet_LOLv1/CG_UNet_LOLv1.yml --weights experiments/CG_UNet_LOLv1/ckpt.pth \
    --cond_opt /experiments/IE_UNet_LOLv1/IE_UNet_LOLv1.yml --cond_weights experiments/IE_UNet_LOLv1/ckpt.pth \
    --lpips --dataset LOLv1
I changed the paths on lines 23, 24, 57, and 58 of CG_UNet_LOLv1.yml and on lines 22, 23, 59, and 60 of IE_UNet_LOLv1.yml. However, I am getting this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.37 GiB (GPU 0; 6.00 GiB total capacity; 11.13 GiB already allocated; 0 bytes free; 11.16 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I have tried setting num_worker_per_gpu: 1, batch_size_per_gpu: 1, max_minibatch: 1, mini_batch_sizes: [1], and several combinations of these options, but the error still arises.
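As an aside, the allocator hint mentioned at the end of that error message can be set through an environment variable before PyTorch allocates anything; a minimal, illustrative sketch (the value is a guess, and the fix suggested earlier in this thread was lowering the split size in eval.py):

```python
# Illustrative only: apply the allocator hint from the OOM message. Set the
# environment variable before the first CUDA allocation; the 128 MB value is a guess.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch
print(torch.cuda.is_available())  # allocator config takes effect on first CUDA use
```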