zhengchen1999 / DAT

PyTorch code for our ICCV 2023 paper "Dual Aggregation Transformer for Image Super-Resolution"
Apache License 2.0

Is there a singleImageDataset type ? #2

Closed Asubayo closed 1 year ago

Asubayo commented 1 year ago

Hello. First of all, thanks for sharing your amazing work.

I am wondering if there is another dataset type available, something like SingleImageDataset. What if I want to give a single low-resolution image as input?

Currently I modified some *.yml files, but I have to specify two folders: dataroot_lq and dataroot_gt. I don't have a high-resolution version of my image. Is it a requirement?
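For reference, a GT-free test entry in a BasicSR-style YAML config might look like the sketch below. The dataset type name and keys here are illustrative assumptions, not necessarily this repo's actual options:

```yaml
# Hypothetical sketch of a test dataset entry that needs no dataroot_gt;
# the type name is an assumption about a dataset class that reads LQ only.
datasets:
  test_1:
    name: Single
    type: SingleImageDataset   # assumed: a dataset that indexes LQ images only
    dataroot_lq: datasets/single
    io_backend:
      type: disk
```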

zhengchen1999 commented 1 year ago

Hi. Thanks for your interest in our work. We will add code to accept the single LR input as soon as possible.

zhengchen1999 commented 1 year ago

Dear Asubayo,

We have added code to test single LR input, and modified the README. You can test your dataset (without HQ) with corresponding commands: here.

If you have any other problem, please let us know. Thanks.
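As a rough illustration of what such a GT-free dataset can look like, here is a minimal, dependency-free sketch. The class name, keys, and behavior are illustrative, not the repo's actual API; a real pipeline would also load and preprocess the image:

```python
import os

class SingleImageFolder:
    """Minimal sketch of a GT-free dataset: it only indexes the LQ folder.

    Illustrative only (not the repo's actual API): each item carries just
    the low-resolution image path, so no dataroot_gt is needed.
    """
    EXTS = ('.png', '.jpg', '.jpeg', '.bmp')

    def __init__(self, dataroot_lq):
        # Collect image files in a deterministic order.
        self.paths = sorted(
            os.path.join(dataroot_lq, f)
            for f in os.listdir(dataroot_lq)
            if f.lower().endswith(self.EXTS)
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # A real implementation would read and normalize the image here;
        # returning only the path keeps the sketch dependency-free.
        return {'lq_path': self.paths[idx]}
```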

Asubayo commented 1 year ago

> Dear Asubayo,
>
> We have added code to test single LR input, and modified the README. You can test your dataset (without HQ) with corresponding commands: here.
>
> If you have any other problem, please let us know. Thanks.

Thank you very much! I am running into some CUDA out-of-memory issues when testing. I only kept one of your example LR images in datasets/single, and before running test.py my memory usage was at only 7% (according to nvidia-smi).

Here is the log I get:

2023-08-11 11:12:55,704 INFO: Loading DAT model from experiments/pretrained_models/DAT/DAT_x2.pth, with param key: [params].

2023-08-11 11:12:56,551 INFO: Model [DATModle] is created.

2023-08-11 11:12:56,551 INFO: Testing Single...

Traceback (most recent call last):
  File "basicsr/test.py", line 48, in <module>
    test_pipeline(root_path)
  File "basicsr/test.py", line 43, in test_pipeline
    model.validation(test_loader, current_iter=opt['name'], tb_logger=None, save_img=opt['val']['save_img'])
  File "s:\ai_upscalers\dat\basicsr\models\base_model.py", line 48, in validation
    self.nondist_validation(dataloader, current_iter, tb_logger, save_img)
  File "s:\ai_upscalers\dat\basicsr\models\sr_model.py", line 166, in nondist_validation
    self.test()
  File "s:\ai_upscalers\dat\basicsr\models\dat_model.py", line 21, in test
    self.output = self.net_g(self.lq)
  File "C:\Users\xxx\AppData\Local\anaconda3\envs\DAT\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "s:\ai_upscalers\dat\basicsr\archs\dat_arch.py", line 851, in forward
    x = self.conv_after_body(self.forward_features(x)) + x
  File "s:\ai_upscalers\dat\basicsr\archs\dat_arch.py", line 835, in forward_features
    x = layer(x, x_size)
  File "C:\Users\xxx\AppData\Local\anaconda3\envs\DAT\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "s:\ai_upscalers\dat\basicsr\archs\dat_arch.py", line 647, in forward
    x = blk(x, x_size)
  File "C:\Users\xxx\AppData\Local\anaconda3\envs\DAT\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "s:\ai_upscalers\dat\basicsr\archs\dat_arch.py", line 563, in forward
    x = x + self.drop_path(self.attn(self.norm1(x), H, W))
  File "C:\Users\xxx\AppData\Local\anaconda3\envs\DAT\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "s:\ai_upscalers\dat\basicsr\archs\dat_arch.py", line 413, in forward
    x1 = self.attns[0](qkv[:,:,:,:C//2], _H, _W)[:, :H, :W, :].reshape(B, L, C//2)
  File "C:\Users\xxx\AppData\Local\anaconda3\envs\DAT\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "s:\ai_upscalers\dat\basicsr\archs\dat_arch.py", line 227, in forward
    attn = attn + relative_position_bias.unsqueeze(0)
RuntimeError: CUDA out of memory. Tried to allocate 1.17 GiB (GPU 0; 6.00 GiB total capacity; 3.66 GiB already allocated; 5.12 MiB free; 4.09 GiB reserved in total by PyTorch)

Is there a way to reduce the batch size or anything else? I saw that there is already an attempt to empty the CUDA cache.

zhengchen1999 commented 1 year ago

I can run the test on a 3090 (24 GB) GPU. I think the 7% memory usage may only be momentary; at some point during inference, the GPU memory demand exceeds your card's limit and causes the error.

For insufficient memory: you can try smaller models like DAT-S or DAT-light. (Take test_single_x2.yml as an example. If you want to change the model to DAT-light, you need to modify L17-29 and L33; you can refer to test_DAT_light_x2.yml, L60-78.) Or use chop (Note 2: set use_chop: True in the YML, L40) to split the input image into smaller patches.
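The chop idea can be sketched in a few lines: split the LQ image into overlapping tiles, upscale each tile independently, and paste the results back, so peak memory scales with the tile size rather than the full image. This is a generic sketch of the technique, not the repo's actual use_chop implementation; the function and parameter names are illustrative:

```python
import numpy as np

def chop_forward(img, upscale_fn, scale=2, tile=64, overlap=8):
    """Upscale `img` (H, W, C) tile-by-tile to bound peak memory.

    `upscale_fn` maps an (h, w, C) patch to an (h*scale, w*scale, C) patch.
    Tiles overlap by `overlap` pixels; half the overlap is cropped away
    when pasting, which hides seams at tile borders.
    """
    h, w, c = img.shape
    out = np.zeros((h * scale, w * scale, c), dtype=img.dtype)
    step = tile - overlap
    for y0 in range(0, h, step):
        for x0 in range(0, w, step):
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            patch = upscale_fn(img[y0:y1, x0:x1])
            # Crop the leading overlap except at the image border.
            py = 0 if y0 == 0 else overlap // 2
            px = 0 if x0 == 0 else overlap // 2
            out[(y0 + py) * scale:y1 * scale,
                (x0 + px) * scale:x1 * scale] = patch[py * scale:, px * scale:]
    return out
```

With a network as `upscale_fn`, the tile size can be lowered until each forward pass fits in GPU memory; the trade-off is more forward passes and possible mild artifacts near tile seams if the overlap is too small.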

Asubayo commented 1 year ago

Thanks for the tip, I will give the smaller models a try. Could you share your results for the 3 test images in datasets/single? I got a result for the bird only, and I am not sure whether it is correct.

zhengchen1999 commented 1 year ago

I provide x2 results (DAT, DAT-2, DAT-S, DAT-light) on Google Drive. You can refer to them.

zhengchen1999 commented 1 year ago

For the convenience of testing, I have made the test images in datasets/single smaller. You can update them and test again. Meanwhile, I have updated the results (and added x3 and x4 results) on Google Drive.

Asubayo commented 1 year ago

Thank you very much. I successfully generated results from those smaller images.