researchmm / TTSR

[CVPR'20] TTSR: Learning Texture Transformer Network for Image Super-Resolution

About testing on Urban100 and Manga109 #18

Closed · scutlrr closed this issue 3 years ago

scutlrr commented 3 years ago

When testing on these two datasets, since they don't have reference images, should I set [ref=lr_sr and ref_sr=lr_sr] or [ref=HR and ref_sr=lr_sr]?

scutlrr commented 3 years ago

In addition, how much CUDA memory do you use during the testing phase?

FuzhiYang commented 3 years ago

For Urban100 and Manga109, which lack references, the ref is just the LR itself, which means ref_sr = lr↓↑.

When evaluating on the CUFED5 dataset with one ref, 11G is enough. When evaluating on CUFED5 with all refs together, CUDA memory usage is about 24G. If this is a problem on your device, you can try "checkpoint" or other tools to reduce the CUDA memory.
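
For illustration, here is a minimal sketch of that setup, assuming a PIL-based pipeline: for a dataset without references, use the LR image itself as Ref and its bicubic down-up version (lr↓↑) as Ref_sr. The scale factor and any cropping or padding the repo's dataloader applies are assumptions.

```python
# Hypothetical sketch, not the repo's dataloader: build Ref / Ref_sr from the
# LR image alone for datasets without reference images (e.g. Urban100).
from PIL import Image

def ref_from_lr(lr_img: Image.Image, scale: int = 4):
    w, h = lr_img.size
    ref = lr_img  # Ref is the LR image itself
    # Ref_sr = LR downsampled by `scale`, then upsampled back (lr↓↑)
    ref_sr = lr_img.resize((w // scale, h // scale), Image.BICUBIC) \
                   .resize((w, h), Image.BICUBIC)
    return ref, ref_sr
```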

scutlrr commented 3 years ago

> For Urban100 and Manga109, which lack references, the ref is just the LR itself, which means ref_sr = lr↓↑.
>
> When evaluating on the CUFED5 dataset with one ref, 11G is enough. When evaluating on CUFED5 with all refs together, CUDA memory usage is about 24G. If this is a problem on your device, you can try "checkpoint" or other tools to reduce the CUDA memory.

Thank you very much for your explanation. I did not set ref and ref_sr correctly, which caused the CUDA out-of-memory error.

zhuxyme commented 3 years ago

I have the same problem: when I test Sun80 and Urban100 on one 2080Ti (11G), it runs out of memory, so I planned to test in parallel on multiple GPUs. But I have tried many times and it still cannot run on multiple GPUs. Can you tell me how you solved this problem?

FuzhiYang commented 3 years ago

> I have the same problem: when I test Sun80 and Urban100 on one 2080Ti (11G), it runs out of memory, so I planned to test in parallel on multiple GPUs. But I have tried many times and it still cannot run on multiple GPUs. Can you tell me how you solved this problem?

  1. You may try the "checkpoint" utilities in PyTorch (torch.utils.checkpoint) to release redundant CUDA memory immediately.
  2. https://github.com/researchmm/TTSR/blob/a3c618d011ef40b0f83004bf9bdbd545e1735ca7/model/SearchTransfer.py#L32. This line uses the most CUDA memory. You can split that batched matrix multiplication and compute it block by block (grid by grid), which may save some CUDA memory; see the sketch after this list.
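
As a rough illustration of point 2, here is a minimal sketch (not the repo's code) that computes the relevance scores block by block and keeps only the running max/argmax, so the full relevance matrix produced by the torch.bmm in SearchTransfer.py is never materialized at once. The tensor names, shapes, and chunk size below are assumptions and should be adapted to the actual unfolded features.

```python
# Hypothetical sketch: chunked replacement for
#   R = torch.bmm(refsr_unfold, lrsr_unfold); val, idx = torch.max(R, dim=1)
# that avoids holding the full (N, Hr*Wr, H*W) relevance matrix in memory.
import torch

def chunked_relevance_max(refsr_unfold, lrsr_unfold, chunk=2048):
    """refsr_unfold: (N, Hr*Wr, C); lrsr_unfold: (N, C, H*W).

    Returns the same (values, indices) as
    torch.max(torch.bmm(refsr_unfold, lrsr_unfold), dim=1),
    processing the reference positions in blocks of `chunk` rows.
    """
    n, hw_ref, _ = refsr_unfold.shape
    best_val, best_idx = None, None
    for start in range(0, hw_ref, chunk):
        # (N, block, H*W) slice of the relevance matrix
        block = torch.bmm(refsr_unfold[:, start:start + chunk], lrsr_unfold)
        val, idx = torch.max(block, dim=1)  # (N, H*W) each
        idx = idx + start                   # block-local -> global ref index
        if best_val is None:
            best_val, best_idx = val, idx
        else:
            better = val > best_val
            best_val = torch.where(better, val, best_val)
            best_idx = torch.where(better, idx, best_idx)
    return best_val, best_idx
```
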
scutlrr commented 3 years ago

> I have the same problem: when I test Sun80 and Urban100 on one 2080Ti (11G), it runs out of memory, so I planned to test in parallel on multiple GPUs. But I have tried many times and it still cannot run on multiple GPUs. Can you tell me how you solved this problem?

If you modify the code, 11G may not be enough. You can rent a 16G server for testing.

scutlrr commented 3 years ago

In your paper, it is mentioned:

> For Urban100, we use the same setting as [41] to regard its LR images as the reference images. Such a design enables an explicit process of self-similar searching and transferring since Urban100 are all building images with strong self-similarity. For Manga109 which also lacks the reference images, we randomly sample HR images in this dataset as the reference images.

So the ref used for Manga109 is not the LR?

FuzhiYang commented 3 years ago

Refs in Manga109 are not LR; just randomly choose another HR image as the Ref.
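
A minimal sketch of that sampling, assuming the HR images sit in one directory; excluding the target image itself is an assumption, not necessarily the paper's exact protocol.

```python
# Hypothetical sketch: pick a random HR image from the dataset directory to
# serve as the reference for a given test image.
import os
import random

def sample_ref(hr_dir, target_name):
    candidates = [f for f in os.listdir(hr_dir) if f != target_name]
    return os.path.join(hr_dir, random.choice(candidates))
```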

scutlrr commented 3 years ago

I also randomly selected an HR image and cropped half of its size as the ref, because if I use a whole HR image as the ref, the PSNR of my model reaches 35+ dB. But the test metrics have not improved much on the other datasets. In addition, I would like to ask whether Sun80 has 20 reference images? The Sun-Hays 80 I downloaded from the Internet has only seven reference images.

FuzhiYang commented 3 years ago

I'm sorry, I forgot where I downloaded the Sun80 dataset and the exact number of reference images per image. You can just randomly choose one Ref during evaluation; the performance will not vary much.

scutlrr commented 3 years ago

> I'm sorry, I forgot where I downloaded the Sun80 dataset and the exact number of reference images per image. You can just randomly choose one Ref during evaluation; the performance will not vary much.

thx!

clttyou commented 3 years ago

> I have the same problem: when I test Sun80 and Urban100 on one 2080Ti (11G), it runs out of memory, so I planned to test in parallel on multiple GPUs. But I have tried many times and it still cannot run on multiple GPUs. Can you tell me how you solved this problem?

> If you modify the code, 11G may not be enough. You can rent a 16G server for testing.

How did you modify it? Can you share the core code? Thank you!