XinyiYing / D3Dnet

Repository for "Deformable 3D Convolution for Video Super-Resolution", SPL, 2020
Apache License 2.0

Is there any quick way to perform inference using the pretrained model on an input video or image file? #23

Closed · rushi-the-neural-arch closed this issue 3 years ago

XinyiYing commented 3 years ago

For quick inference:

  1. Prepare the test dataset: download your datasets and arrange the test data in code/data as shown below:

    data
    ├── dataset_1
    │    ├── scene_1
    │    │    ├── hr
    │    │    │    ├── hr_01.png
    │    │    │    ├── hr_02.png
    │    │    │    ├── ...
    │    │    │    └── hr_M.png
    │    │    └── lr_x4
    │    │         ├── lr_01.png
    │    │         ├── lr_02.png
    │    │         ├── ...
    │    │         └── lr_M.png
    │    ├── ...
    │    └── scene_M
    ├── ...
    └── dataset_N
  2. Compile the deformable 3D convolution: cd to code/dcn. Windows users run make.bat in cmd; Linux users run bash make.sh. The scripts will build D3D automatically and create some folders. See code/dcn/test.py for example usage.

  3. Perform inference: python test.py --dataset [dataset_1]. A consolidated shell sketch of these three steps is given below.
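
For convenience, here is a minimal shell sketch of the three steps for a single dataset and scene, assuming you start from a low-resolution video. The file name my_lr_video.mp4 and the ffmpeg frame dump are placeholders for your own data and tooling, not part of this repository:

    # Step 1: create the layout from the tree above (one dataset, one scene)
    mkdir -p code/data/dataset_1/scene_1/hr code/data/dataset_1/scene_1/lr_x4
    # Dump LR frames into lr_x4 (ffmpeg is just one way to produce numbered frames);
    # hr holds the ground-truth frames if you have them
    ffmpeg -i my_lr_video.mp4 code/data/dataset_1/scene_1/lr_x4/lr_%02d.png

    # Step 2: compile the deformable 3D convolution (Linux shown; Windows users run make.bat)
    cd code/dcn && bash make.sh && cd ..

    # Step 3: run inference on the prepared dataset
    python test.py --dataset dataset_1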

rushi-the-neural-arch commented 3 years ago

Thank you very much for the quick reply! :) However, if I only have a low-resolution input video/image (i.e., I cannot build a test dataset with LR-HR image pairs as you explained), is there any way to run inference directly on the LR frames and get HR output? I have already compiled the deformable 3D convolution and done a test run of training the network.

Thanks!

XinyiYing commented 3 years ago

Thank you for your comment. We have added inference.py and adjusted dataset.py in our code to offer direct inference with D3Dnet.
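
For anyone reading along before opening inference.py, a rough sketch of what LR-only inference with a video super-resolution model like D3Dnet can look like is given below. The model class Net, its constructor, the checkpoint path log/D3Dnet.pth.tar, the 7-frame temporal window, RGB input, and the (1, T, C, H, W) layout are all assumptions and may differ from the actual inference.py and model.py:

    import glob

    import numpy as np
    import torch
    from PIL import Image

    from model import Net  # assumed: the D3Dnet generator in code/model.py

    SCALE = 4    # assumed upscaling factor
    WINDOW = 7   # assumed number of consecutive LR frames per forward pass

    def load_frames(folder):
        # Load all LR frames as a float tensor of shape (T, C, H, W) in [0, 1].
        paths = sorted(glob.glob(folder + "/*.png"))
        frames = [np.asarray(Image.open(p).convert("RGB"), dtype=np.float32) / 255.0
                  for p in paths]
        return torch.from_numpy(np.stack(frames)).permute(0, 3, 1, 2)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    net = Net(SCALE).to(device).eval()                            # assumed constructor
    ckpt = torch.load("log/D3Dnet.pth.tar", map_location=device)  # assumed path
    net.load_state_dict(ckpt["state_dict"])                       # assumed checkpoint layout

    lr = load_frames("my_lr_frames")  # placeholder folder of LR .png frames
    pad = WINDOW // 2
    # Replicate the first/last frame so every frame has a full temporal window.
    lr = torch.cat([lr[:1].repeat(pad, 1, 1, 1), lr, lr[-1:].repeat(pad, 1, 1, 1)])

    with torch.no_grad():
        for t in range(pad, lr.shape[0] - pad):
            clip = lr[t - pad:t + pad + 1].unsqueeze(0).to(device)  # assumed (1, T, C, H, W)
            sr = net(clip).squeeze(0).clamp(0.0, 1.0).cpu()         # assumed output (C, sH, sW)
            img = (sr.permute(1, 2, 0).numpy() * 255.0).round().astype(np.uint8)
            Image.fromarray(img).save("sr_%02d.png" % (t - pad + 1))

Processing one window at a time keeps memory bounded; batching several windows per forward pass would be faster if GPU memory allows.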

rushi-the-neural-arch commented 3 years ago

Thank you very much for providing the inference file! I will check it out soon!