LoSealL / VideoSuperResolution

A collection of state-of-the-art video and single-image super-resolution architectures, reimplemented in TensorFlow.
MIT License

where should the training files be placed? #5

Closed: xia00100 closed this issue 5 years ago

xia00100 commented 5 years ago

I really appreciate your sharing this project, but I just started using TensorFlow, so may I ask where the training files should be placed? I have already downloaded 91-Image, Set5, Set14, etc., but I can't run the training successfully!

xia00100 commented 5 years ago

```
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'label/hr' with dtype float and shape [?,?,?,1]
  [[node label/hr (defined at /.../Train/VSR/Framework/SuperResolution.py:124) = Placeholder[dtype=DT_FLOAT, shape=[?,?,?,1], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
  [[{{node loss/Const_2/_39}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_279_loss/Const_2", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
```

LoSealL commented 5 years ago

@xia00100 Hi, you can check datasets.yaml.

Let's say you download 91-image.tar.gz and decompress the 91 images into /mnt/data/datasets/91-image; then you just modify the Root and the Path: 91-image sections in datasets.yaml. If you are familiar with Python's Path.glob() function, the patterns should be easy to understand. Once Root and Path are correctly set, combine them in the Dataset section as I did.

After that, VSR can automatically parse the dataset files and start training.
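
For illustration, here is a minimal sketch of what such a datasets.yaml could look like under the Root/Path/Dataset layout described above (the exact keys, dataset names, and glob patterns are my guesses; check the datasets.yaml shipped with the repo for the real ones):

```yaml
# Hypothetical sketch of datasets.yaml; keys and patterns are illustrative.
Root: /mnt/data/datasets        # common parent folder for all datasets

Path:
  91-image: 91-image/*.bmp      # glob pattern resolved against Root, a la Path.glob()
  set5: Set5/*.png
  set14: Set14/*.png

Dataset:
  91-image:
    train: 91-image             # names refer back to the entries under Path
    val: set5
    test: set14
```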

LoSealL commented 5 years ago

For a good start, try training ESPCN first. If you want to train SRCNN or VDSR, currently you must add --add_custom_callbacks=upsample as an argument to run.py; an example invocation follows.
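
The invocation could look roughly like this (--add_custom_callbacks=upsample is the flag named above; the --model and --dataset flag names are assumptions for illustration, so check run.py --help for the real ones):

```bash
# ESPCN trains directly on low-resolution patches:
python run.py --model=espcn --dataset=91-image

# SRCNN and VDSR expect pre-upsampled inputs, hence the extra callback:
python run.py --model=srcnn --dataset=91-image --add_custom_callbacks=upsample
```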

Iatboo commented 5 years ago

> For a good start, try training ESPCN first. If you want to train SRCNN or VDSR, currently you must add --add_custom_callbacks=upsample as an argument to run.py.

Hi, thank you for sharing your project. Which dataset did you use to train VESPCN? I have trained VESPCN with my own datasets, but the performance was not good. Could you share your results? Thanks again.

LoSealL commented 5 years ago

@Iatboo Did you use your own VESPCN or my version? In the paper, the authors claim the optical flow ranges over (-1, 1) and represents movement across the entire image, which however leads to quite poor results. So instead I interpret the (-1, 1) flow range in absolute pixels, which means the movement is restricted to at most 1 pixel. Done that way, my result is around 25.5 dB on VID4: better than bicubic and EDSR, worse than Bayesian.
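
To make the two readings concrete, here is a minimal NumPy sketch of the difference (the shapes, names, and the half-image normalization convention are my assumptions for illustration, not code from this repo):

```python
import numpy as np

def flow_to_pixels(flow, height, width, absolute_pixels=True):
    """Turn a network flow output in (-1, 1) into pixel displacements.

    flow: array of shape [H, W, 2] with values in (-1, 1).
    """
    if absolute_pixels:
        # My interpretation: (-1, 1) means at most 1 pixel of motion.
        return flow
    # The paper's interpretation (assuming the usual normalized-grid
    # convention): a value of 1 shifts by half the image size.
    return flow * np.array([width / 2.0, height / 2.0])

flow = np.random.uniform(-1, 1, size=(4, 4, 2))
print(np.abs(flow_to_pixels(flow, 64, 64, absolute_pixels=True)).max())   # <= 1 px
print(np.abs(flow_to_pixels(flow, 64, 64, absolute_pixels=False)).max())  # up to 32 px
```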

I trained with the MCL-V and GOPRO datasets; you can find the download links in the README file.

The pre-trained weights are on the way; please wait for some days (or some months...)

Iatboo commented 5 years ago

@LoSealL Thank you very much for your detailed answer.

LoSealL commented 5 years ago

Closed since there are no more questions here.