LoSealL / VideoSuperResolution

A collection of state-of-the-art video and single-image super-resolution architectures, reimplemented in TensorFlow.
MIT License

Issue with prepare_data.py #13

Closed AIRedWood closed 5 years ago

AIRedWood commented 5 years ago

I downloaded 'VID4' and 'vespcn.rar' with 'python prepare_data.py'. What's going wrong here? And how do I pass '--noauth_local_webserver'?

(dl) root@fa9d5f7f49cd:/VSR# python prepare_data.py
Do you wish to download DIV2K_train_HR? [y/N] n
Do you wish to download DIV2K_valid_HR? [y/N] n
Do you wish to download DIV2K_train_LR_unknown_X4? [y/N] n
Do you wish to download DIV2K_valid_LR_unknown_X4? [y/N] n
Do you wish to download SET5? [y/N] n
Do you wish to download SET14? [y/N] n
Do you wish to download SunHay80? [y/N] n
Do you wish to download Urban100? [y/N] n
Do you wish to download VID4? [y/N] y
Do you wish to download BSD300? [y/N] n
Do you wish to download BSD500? [y/N] n
Do you wish to download 91image? [y/N] n
Do you wish to download waterloo? [y/N] n
Do you wish to download GOPRO_Large? [y/N] n
Do you wish to download MCL-V? [y/N] n
Do you wish to download srcnn.tar? [y/N] n
Do you wish to download espcn.tar? [y/N] n
Do you wish to download edsr? [y/N] n
Do you wish to download dncnn? [y/N] n
Do you wish to download carn? [y/N] n
Do you wish to download srdensenet? [y/N] n
Do you wish to download vdsr? [y/N] n
Do you wish to download msrn? [y/N] n
Do you wish to download vespcn? [y/N] y
Downloading data from https://people.csail.mit.edu/celiu/CVPR2011/videoSR.zip
328499200/328495021 [==============================] - 67s 0us/step
/usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/oauth2client/_helpers.py:255: UserWarning: Cannot access /tmp/token.json: No such file or directory
  warnings.warn(_MISSING_FILE_MESSAGE.format(filename))

Your browser has been opened to visit:

https://accounts.google.com/o/oauth2/auth?client_id=478543842142-v29iqrpjg8oc54vabqbbtng19gto09b6.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.metadata.readonly&access_type=offline&response_type=code

If your browser is on a different machine then exit and re-run this application with the command-line parameter

--noauth_local_webserver

LoSealL commented 5 years ago

@AIRedWood VID4 seems to be successfully fetched. vespcn.zip is at https://drive.google.com/open?id=19u4YpsyThxW5dv4fhpMj7c5gZeEDKthm

Are you using a remote PC? If not, your local browser will open and show the Google OAuth page, and you need to permit the script to download the shared file. If you are using a remote PC, you need to visit the URL above and download the file yourself.
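For context, oauth2client-based scripts usually pick this flag up through `oauth2client.tools.argparser`; the stand-alone sketch below shows the same flag wired with plain `argparse`. The parser setup is hypothetical, and prepare_data.py's real wiring may differ:

```python
import argparse

# Sketch: how a downloader script can expose the flag that oauth2client
# honors on headless machines (hypothetical setup, stdlib only).
parser = argparse.ArgumentParser(description="dataset downloader (sketch)")
parser.add_argument(
    "--noauth_local_webserver",
    action="store_true",
    help="do not open a local browser; print the OAuth URL instead",
)

# Simulate a headless invocation.
args = parser.parse_args(["--noauth_local_webserver"])
if args.noauth_local_webserver:
    print("visit the OAuth URL manually and paste the verification code back")
```

With the flag set, the OAuth library prints the URL and waits for a pasted verification code instead of launching a browser.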

AIRedWood commented 5 years ago

@LoSealL Yes, I'm using a remote PC. And I have another question: where should I put the .ckpt file of vespcn?

AIRedWood commented 5 years ago

@LoSealL thank you very much for your answer

LoSealL commented 5 years ago

@AIRedWood .ckpt files are unzipped to ../Results/vespcn/save by default. When evaluating with python run.py --model=vespcn --save_dir=../Results/, the --save_dir should point to the parent of the model folder (the model folder carries the same name as the model).
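In other words, the checkpoint directory is derived from --save_dir plus the model name; a minimal sketch of that lookup (the actual resolution happens inside the project's run scripts):

```python
from pathlib import Path

def checkpoint_dir(save_dir: str, model: str) -> Path:
    # --save_dir is the parent folder; the model folder inside it carries
    # the model's name, and checkpoints live in its "save" subfolder.
    return Path(save_dir) / model / "save"

print(checkpoint_dir("../Results", "vespcn").as_posix())  # ../Results/vespcn/save
```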

For remote PC users, I will consider your request and support --noauth_local_webserver in a future update.

AIRedWood commented 5 years ago

@LoSealL When I run python run.py --model=vespcn --save_dir=../Results/ I get:

INFO:tensorflow:Restoring parameters from ../Results/vespcn/save/vespcn-sc4-ep1000.ckpt
WARNING:tensorflow:frames is empty. [size=-1]
Test: 0it [00:00, ?it/s]

WARNING:tensorflow:frames is empty. [size=-1]

I saw the solution you described in issue #12: "Please organize your data folder like this:"

|original
|--video1
|----frame01.png
|----frame02.png
|----....
|--video2
|----frame01.png
|----frame02.png
|--...

Should I organize /mnt/data/datasets/vid4/ like this?

|input
|--video1
|----frame01.png
|----frame02.png
|----....
|--video2
|----frame01.png
|----frame02.png
|--...
|original
|--video1
|----frame01.png
|----frame02.png
|----....
|--video2
|----frame01.png
|----frame02.png
|--...
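One way to sanity-check such a layout before running is to walk the clip folders and list their frames. A stdlib-only sketch (the folder and file names below are made up for the demo):

```python
import tempfile
from pathlib import Path

def clips_with_frames(root: Path) -> dict:
    """Map each clip folder under `root` to its sorted .png frame names."""
    return {
        clip.name: sorted(p.name for p in clip.iterdir() if p.suffix == ".png")
        for clip in sorted(root.iterdir()) if clip.is_dir()
    }

# Build a tiny fake dataset to demonstrate the expected shape.
root = Path(tempfile.mkdtemp()) / "original"
for clip in ("video1", "video2"):
    (root / clip).mkdir(parents=True)
    for i in (1, 2):
        (root / clip / f"frame{i:02d}.png").touch()

print(clips_with_frames(root))
```

Any clip that maps to an empty list would explain a "frames is empty" warning from the loader.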

LoSealL commented 5 years ago

@AIRedWood You should specify the data when evaluating model:

python run.py --model=vespcn --test=vid4
python run.py --model=vespcn --infer=<some-data-folder>

Either is OK. You don't need to re-organize VID4 because the downloaded zip file is well organized. You only need to put images in the above structure for your own dataset.

P.S. Note the difference between --test and --infer.

AIRedWood commented 5 years ago

@LoSealL The structure of vid4 in /mnt/data/datasets is:

|vid4
|--calendar
|----bicubic
|----input
|----original
|----output
|--city
|----bicubic
|----input
|----original
|----output
...

I run python run.py --model=vespcn --test=vid4

INFO:tensorflow:Restoring parameters from ../Results/vespcn/save/vespcn-sc4-ep1000.ckpt
WARNING:tensorflow:frames is empty. [size=-1]
Test: 0it [00:00, ?it/s]

WARNING:tensorflow:frames is empty. [size=-1]
(dl) root@6d676553472f:/VSR/Train#

and then I re-organized VID4 like:

|input
|--video1
|----frame01.png
|----frame02.png
...
|--video2
|----frame01.png
|----frame02.png
...
|--video3
|----frame01.png
|----frame02.png
...
|--video4
|----frame01.png
|----frame02.png
...

where video1 = calendar/input, video2 = city/input, video3 = foliage/input, and video4 = walk/input.

then I run python run.py --model=vespcn --infer=/VSR/Train/input

Traceback (most recent call last):
  File "run.py", line 49, in <module>
    tf.app.run(main)
  File "/usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "run.py", line 38, in main
    return Run.run(additional_functions)
  File "/VSR/Train/VSR/Tools/Run.py", line 181, in run
    infer_loader = loader(infer_data, 'infer', infer_config)
  File "/VSR/Train/VSR/DataLoader/Loader.py", line 471, in __init__
    kwargs)
  File "/VSR/Train/VSR/DataLoader/Loader.py", line 140, in __init__
    self.prob = self._read_file(dataset)._calc_select_prob()
  File "/VSR/Train/VSR/DataLoader/Loader.py", line 226, in _calc_select_prob
    weights += [np.prod(f.shape) * f.frames]
  File "/VSR/Train/VSR/DataLoader/VirtualFile.py", line 378, in shape
    file = self.read_file[0]
IndexError: list index out of range

It always fails. Can you tell me what is wrong?

emm.... it's really frustrating.

LoSealL commented 5 years ago

@AIRedWood Oh, sorry for that, I think I messed up the VID4 zip file... Try organizing vid4 like this:

vid4
|-input
|--calendar
|----frame 001.png
...
|--city
|-original
|--calendar
...

and use python run.py --model=vespcn --infer=vid4/input
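If the extracted archive has the per-clip layout from earlier in this thread (calendar/input, calendar/original, ...), flipping it into the kind-first layout above can be scripted. A sketch with hypothetical paths:

```python
import shutil
import tempfile
from pathlib import Path

def regroup_vid4(src: Path, dst: Path) -> None:
    """Copy vid4/<clip>/<kind> into vid4/<kind>/<clip> (kind: input/original)."""
    for clip in sorted(p for p in src.iterdir() if p.is_dir()):
        for kind in ("input", "original"):
            if (clip / kind).is_dir():
                shutil.copytree(clip / kind, dst / kind / clip.name)

# Demo on a throwaway folder with one clip.
base = Path(tempfile.mkdtemp())
src, dst = base / "vid4_raw", base / "vid4"
for kind in ("input", "original"):
    (src / "calendar" / kind).mkdir(parents=True)
    (src / "calendar" / kind / "frame001.png").touch()
regroup_vid4(src, dst)
print(sorted(p.relative_to(dst).as_posix() for p in dst.rglob("*.png")))
```

Copying (rather than moving) keeps the original extraction intact in case the layout guess is wrong.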

AIRedWood commented 5 years ago

@LoSealL

OK I got it. Next I'll try to do more. and thank you again !

AIRedWood commented 5 years ago

@LoSealL The mean PSNR I compute between original and output (from the author of vespcn) is 24.4763, but the mean PSNR between original and infer (the eval of this project) is just 19.5982.

And I find that the number of pictures in original is n, in output it is n-1, and in infer it is n-2.

So I compared original[1,...,n-1] with output[1,...,n-1], and original[2,...,n-1] with infer[1,...,n-2]; the results are as shown above.

So: 1. I'm confused by the number of pictures. 2. Why is the mean PSNR so low?

By the way, I use tf.image.psnr to compute the PSNR.

LoSealL commented 5 years ago

Hi @AIRedWood First of all, I have to mention that output in the vid4 dataset comes from the CVPR 2011 paper "A Bayesian Approach to Adaptive Video Super Resolution".

Secondly, I don't pad the 1st and last frames: since the depth is 3, the output starts at the 2nd frame and ends at the 2nd-to-last ([2, n-1]). So to align the frames for metrics, you can remove the 1st and last images before computing PSNR, and put them back afterwards. I intend to add an argument to handle this...
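For a depth-3 model with no boundary padding, aligning ground truth against the outputs is just trimming the reference's two end frames; a tiny sketch:

```python
def align(reference, outputs):
    """Trim the reference's first and last frames so its indices line up
    with a depth-3 model that emits frames [2, n-1] only."""
    assert len(outputs) == len(reference) - 2, "unexpected frame counts"
    return reference[1:-1], outputs

gt = ["f1", "f2", "f3", "f4", "f5"]   # n = 5 ground-truth frames
sr = ["s2", "s3", "s4"]               # model output covers frames 2..4
ref, out = align(gt, sr)
print(ref)  # ['f2', 'f3', 'f4']
```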

Lastly, once the frames are all aligned, you should get a PSNR of about 23+ dB.

To see why the PSNR may be lower than in the paper, refer to my code in ImageProcess.py: read the comment of rgb_to_yuv and you'll know why — PSNR is measured on the Y channel only. You can use my eval mode:

python run.py --mode=eval --reference_dir=... --input_dir=... --enable_psnr --l_only
python run.py --mode=eval --model=vespcn --checkpoint_dir=... --test=vid4 --enable_psnr --l_only

Both are OK. The 1st command runs on existing files, while the 2nd runs from a model checkpoint and generates the test images on the fly.
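The Y-only PSNR that eval mode computes can be approximated outside the project with plain NumPy. This sketch assumes BT.601 luma weights; check the rgb_to_yuv comment in ImageProcess.py for the exact coefficients the project uses:

```python
import numpy as np

def psnr_y(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR on the luma (Y) channel only; BT.601 weights assumed."""
    w = np.array([0.299, 0.587, 0.114])          # R, G, B -> Y
    ya, yb = a @ w, b @ w
    mse = np.mean((ya - yb) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

# Demo on a random image plus mild Gaussian noise.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (8, 8, 3)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(round(psnr_y(img, noisy), 2))
```

Computing on Y only typically reports a different (often higher) number than full-RGB PSNR, which is one common source of mismatches against published scores.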