LoSealL / VideoSuperResolution

A collection of state-of-the-art video and single-image super-resolution architectures, reimplemented in TensorFlow.
MIT License

Hi, how can I save the result image? #83

Closed Undercut closed 4 years ago

Undercut commented 5 years ago

I found the result images in Results/model-name/test-datasets-name. But if I use --infer, where can I find my result images?

LoSealL commented 5 years ago

Still under Results/model-name/infer-folder-name
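
For example, a minimal sketch to locate them, assuming the model is vespcn and the folder passed to --infer was named my_clip (both names are placeholders for your own run):

```python
from pathlib import Path

# Hypothetical names for illustration: "vespcn" is the model and "my_clip" is
# the folder name passed to --infer; adjust both to your own run.
outputs = Path("Results") / "vespcn" / "my_clip"
for img in sorted(outputs.glob("*.png")):
    print(img)
```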

Undercut commented 5 years ago

> Still under Results/model-name/infer-folder-name

Thanks for the answer. I have one more question: I'm testing the TensorFlow VESPCN model to super-resolve a video. A 15-second, 24 fps, 720p clip was split into 200+ frames, and pushing this image sequence through VESPCN takes a very long time, over an hour (tested on a 2600X CPU, without GPU acceleration). Is there any way to speed up inference? Would using PyTorch help? I also read the VESPCN paper: it claims to use motion compensation to speed up video super-resolution, but that does not seem to be reflected in this model. Is there a model that fully reproduces the paper?

LoSealL commented 5 years ago

  1. PyTorch's CPU implementation is quite slow.
  2. For VESPCN, it will process 3 frames to generate 1 output (see the sketch below).
  3. Motion compensation is not for acceleration...
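
To make point 2 concrete, here is a rough sketch of the sliding-window idea (the names below are made up for illustration and are not this repo's API): each output frame consumes a short window of consecutive input frames, so runtime grows linearly with the number of frames, and a 200+ frame clip still costs one network pass per output frame on CPU.

```python
# Illustration only; function and variable names here are hypothetical.
def sliding_windows(frames, depth=3):
    """Yield one group of `depth` consecutive frames per output frame,
    padding the ends by repeating the first/last frame."""
    pad = depth // 2
    padded = [frames[0]] * pad + list(frames) + [frames[-1]] * pad
    for i in range(len(frames)):
        yield padded[i:i + depth]

frames = [f"frame_{i:04d}.png" for i in range(200)]
for window in sliding_windows(frames):
    # one forward pass of the network per window, i.e. per output frame,
    # so total work scales with the number of frames
    ...
```
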
Undercut commented 5 years ago

Thank you, I got it. By the way, which model should I choose if I want a fast VSR?

LoSealL commented 5 years ago

FRVSR is highly recommended

Undercut commented 5 years ago

Thank you so much.

wwlCape commented 4 years ago

Hi, when I test vespcn-tensorflow, it doesn't save any image from inference. Can you give me some advice? @Undercut @LoSealL It just returns Test: 0it [00:00, ?it/s]. My command is python eval.py dbpn -t vid4 --pretrain=../Results/vespcn/save

LoSealL commented 4 years ago

You should use python eval.py vespcn -t vid4 --pretrain=../Results/vespcn/save

wwlCape commented 4 years ago

Thanks, I see that you have changed /VSR/Backend/TF/Framework/Trainer.py, and it works for testing the vespcn model now. Thank you very much!

wenjianma commented 2 years ago

Hi, when I run python eval.py vespcn -t vid4 --pretrain=../Results/vespcn/save, the terminal reports an error and I don't know how to solve it. Can you give me some advice?

Here is the report information:

(TF2.1) D:\GP_AI\Super Resolution\VideoSuperResolution\Train>python eval.py vespcn -t vid4 --pretrain=../Results/vespcn/save
2022-05-07 20:01:40,213 INFO: LICENSE: VESPCN is proposed at CVPR2017 by Twitter. Implemented by myself @LoSealL.
Traceback (most recent call last):
  File "eval.py", line 122, in <module>
    main()
  File "eval.py", line 83, in main
    model.load(opt.pretrain)
  File "d:\gp_ai\super resolution\videosuperresolution\VSR\Backend\Torch\Models\Model.py", line 137, in load
    self.sequential_load(model, str(pth), map_location)
  File "d:\gp_ai\super resolution\videosuperresolution\VSR\Backend\Torch\Models\Model.py", line 148, in sequential_load
    state_dict = torch.load(pth, map_location=map_location)
  File "F:\Anaconda\envs\TF2.1\lib\site-packages\torch\serialization.py", line 699, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "F:\Anaconda\envs\TF2.1\lib\site-packages\torch\serialization.py", line 231, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "F:\Anaconda\envs\TF2.1\lib\site-packages\torch\serialization.py", line 212, in __init__
    super(_open_file, self).__init__(open(name, mode))
PermissionError: [Errno 13] Permission denied: '../Results/vespcn/save'
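
For reference, the PermissionError at the bottom usually means torch.load was handed a directory rather than a checkpoint file: calling open() on a directory raises Errno 13 on Windows. A minimal sketch, assuming the weights are stored as a .pth file somewhere inside the save folder (the exact file name depends on your own run):

```python
from pathlib import Path
import torch

# torch.load opens a single file; passing the save directory itself is what
# triggers PermissionError (Errno 13) on Windows. Pick an actual checkpoint
# file inside the folder instead.
save_dir = Path("../Results/vespcn/save")
checkpoints = sorted(save_dir.glob("*.pth"))
if not checkpoints:
    raise FileNotFoundError(f"no .pth checkpoint found in {save_dir}")
state_dict = torch.load(checkpoints[-1], map_location="cpu")
print(f"loaded {checkpoints[-1]} ({len(state_dict)} entries)")
```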