xinntao / EDVR

Winning Solution in NTIRE19 Challenges on Video Restoration and Enhancement (CVPR19 Workshops) - Video Restoration with Enhanced Deformable Convolutional Networks. EDVR has been merged into BasicSR and this repo is a mirror of BasicSR.
https://github.com/xinntao/BasicSR

Limit on number of images in a dataset? #97

Closed · sjscotti closed this issue 4 years ago

sjscotti commented 4 years ago

Hi! I am able to get my modified test.py script to convert my own images (data_mode = 'blur_comp') if I pad them up to 1280x720 (see my question under #48 about a problem using smaller images), but I find that I can only have between 300 and 400 images in an input folder. Is there a way for the code to handle a larger number of images? Alternatively, if I divided my images into groups of about 300 per folder and had the script convert each folder separately, would that work? If so, would I need the last images in the first folder to be 300.png, 301.png, 302.png, and the first images in the second folder to be 299.png, 300.png, 301.png, so that there would be no temporal jump between the converted 300.png from the first folder and the converted 301.png from the second folder?
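
For reference, a minimal sketch of such an overlapping split (the helper name, chunk size, and file layout are assumptions, not part of EDVR; overlapping by N_frames // 2 frames on each side, e.g. 2 for a 5-frame temporal window, keeps every center frame's full neighborhood intact, and the duplicated outputs can simply be discarded afterwards):

```python
import shutil
from pathlib import Path

# Hypothetical helper: split a long frame sequence into overlapping chunks
# so each chunk fits in memory. The overlap (N_frames // 2 on each side,
# e.g. 2 for a 5-frame window) gives every center frame its full temporal
# neighborhood; overlapping outputs are discarded after conversion.
def split_into_chunks(src_dir, dst_root, chunk_size=300, overlap=2):
    frames = sorted(Path(src_dir).glob('*.png'))
    start, chunk_idx = 0, 0
    while start < len(frames):
        chunk_dir = Path(dst_root) / f'chunk_{chunk_idx:03d}'
        chunk_dir.mkdir(parents=True, exist_ok=True)
        # Extend the window by `overlap` frames on both sides (clamped).
        lo = max(0, start - overlap)
        hi = min(len(frames), start + chunk_size + overlap)
        for f in frames[lo:hi]:
            shutil.copy(f, chunk_dir / f.name)
        start += chunk_size
        chunk_idx += 1

split_into_chunks('input_frames', 'chunks', chunk_size=300, overlap=2)
```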

xinntao commented 4 years ago

I wonder why there is a limit ("I can only have between 300 and 400 images in an input folder"?). During testing, you can use an arbitrary number of images from your own dataset.

Did you mean that you do not have enough memory to load all the images? If so, just read each frame from the folder as it is needed instead of loading the whole folder at once.
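
A minimal sketch of that approach, reading only the frames in the current temporal window rather than preloading the whole folder (the folder path, glob pattern, and 5-frame window are assumptions, not the repo's actual data loader):

```python
import cv2
from pathlib import Path

# Minimal sketch, not EDVR's actual loader: load only the N frames of the
# current temporal window per iteration. Folder path, file glob, and the
# 5-frame window size are assumptions.
N = 5  # temporal window size
paths = sorted(Path('input_frames').glob('*.png'))

for center in range(len(paths)):
    # Clamp indices at the sequence boundaries (simple replicate padding).
    idxs = [min(max(center + off, 0), len(paths) - 1)
            for off in range(-(N // 2), N // 2 + 1)]
    window = [cv2.imread(str(paths[i])) for i in idxs]  # just N frames in memory
    # ... stack `window`, run the model on it, write the output frame ...
```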

sjscotti commented 4 years ago

Hi! The problem is, as you said, not enough memory. I didn't realize that the program reads in an entire folder of images at one time, so putting fewer images in a single folder fixed it. I'll close this out now. Thanks!