Closed DenisSergeevitch closed 2 months ago
F... F... it doesn't work at all...
Hello,
First of all, thank you for creating such a nice demo!
To identify the issue, we tested the REDS4 020 sequence from the Colab demo you provided. Our code can be found here.
The test compared the VSRDB results on 1) the original image sequence and 2) the sequence obtained by decompressing a losslessly (qp=0) compressed video with ffmpeg. The second approach (left side of the image below) produced significantly blurrier, artifact-laden frames.
This appears to be due to noise introduced during the ffmpeg compression and extraction step, which suggests we need to adjust the ffmpeg options and re-run the experiment.
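One common pitfall worth checking (an assumption here, since the exact ffmpeg options used are not shown): `-qp 0` with libx264 is only lossless when no pixel-format conversion happens first. If ffmpeg converts the PNG frames to its default yuv420p, chroma is subsampled and the round trip is not bit-exact. A minimal sketch of a truly lossless round trip using `libx264rgb` (file names and paths are hypothetical):

```python
import subprocess

def encode_cmd(frames_glob: str, out_path: str) -> list[str]:
    """Build a truly lossless encode command.

    libx264 with -qp 0 is only lossless if no pixel-format conversion
    happens first; the default yuv420p subsamples chroma and loses
    information. libx264rgb keeps the PNG frames in RGB.
    """
    return [
        "ffmpeg", "-y",
        "-framerate", "24",
        "-pattern_type", "glob", "-i", frames_glob,
        "-c:v", "libx264rgb", "-qp", "0",  # RGB path + qp=0 -> lossless
        out_path,
    ]

def decode_cmd(video_path: str, frames_pattern: str) -> list[str]:
    """Extract frames back to PNG without any scaling or format change."""
    return ["ffmpeg", "-y", "-i", video_path, frames_pattern]

cmd = encode_cmd("frames/*.png", "lossless.mkv")
# subprocess.run(cmd, check=True)  # uncomment if ffmpeg is installed
```

If the decoded frames still differ from the originals, comparing checksums per frame will show exactly where the loss happens.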
Furthermore, a likely reason FMA-Net does not work well in your "old videos" setting is the domain gap between its training data and degraded historical footage.
To ensure good performance on your videos, it seems necessary to retrain or finetune FMA-Net specifically for old videos.
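Fine-tuning for old videos would need training pairs whose inputs look like old footage. A minimal pure-NumPy sketch of such a degradation pipeline (the recipe and the `degrade` function are hypothetical illustrations, not part of FMA-Net's training code):

```python
import numpy as np

def degrade(frame, scale=4, noise_std=5.0, rng=None):
    """Create a low-quality training input from a clean HR frame.

    Hypothetical 'old video' recipe: 3x3 box blur, then x`scale`
    subsampling, then additive Gaussian noise (film-grain stand-in).
    """
    rng = rng or np.random.default_rng(0)
    f = frame.astype(np.float32)
    # 3x3 box blur via shifted sums (edges wrap around via np.roll)
    blurred = sum(np.roll(np.roll(f, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    lr = blurred[::scale, ::scale]               # naive downsampling
    lr = lr + rng.normal(0.0, noise_std, lr.shape)  # additive noise
    return np.clip(lr, 0, 255).astype(np.uint8)

hr = np.full((64, 64, 3), 128, np.uint8)
lr = degrade(hr)
print(lr.shape)  # (16, 16, 3)
```

The HR frame stays as the target; the degraded output becomes the network input during fine-tuning.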
I will close this issue as there has been no further discussion. Please re-open the issue if there are additional comments.
Hello, and thank you for sharing the FMA-Net code. I have been waiting for this model for a while, as I personally love applying ML tools to old historical videos.
Colab
I have made this Colab notebook with reduced VRAM usage via mixed precision for anyone who wants to try FMA-Net.
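For context on why mixed precision cuts VRAM: activations stored in 16-bit take half the memory of 32-bit ones (in PyTorch this is typically enabled by running inference under `torch.autocast`). A small NumPy illustration, with a made-up activation shape rather than FMA-Net's actual tensors:

```python
import numpy as np

# A made-up activation tensor: batch x channels x height x width
shape = (1, 64, 180, 320)

fp32 = np.zeros(shape, dtype=np.float32)
fp16 = np.zeros(shape, dtype=np.float16)

print(fp32.nbytes // 2**20, "MiB in float32")  # 14 MiB
print(fp16.nbytes // 2**20, "MiB in float16")  # 7 MiB
```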
Issues
The problem I encountered is that, with the default model, the results are almost as blurry as frames resized bicubically:
Here is a demo with a slightly blurry source video after x4 upscaling:
https://github.com/KAIST-VICLab/FMA-Net/assets/2140110/8df656f4-a810-4c6b-b711-7d409075e708
(left side was resized x4 bicubically)
Here is another example with a more damaged video and its processed x4 version. The FMA-Net model made the results blurrier after processing:
https://github.com/KAIST-VICLab/FMA-Net/assets/2140110/0c9dbf0d-9ec1-4ac9-a58e-4ac983bc5e52
Frame-by-frame comparison: https://imgsli.com/MjY0MTQ3
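"Almost as blurry as bicubic" can be quantified by computing PSNR of both the bicubic upscale and the network output against a ground-truth frame. A minimal sketch with synthetic data (a real comparison would load the actual frames; `psnr` here is the standard formula, not an FMA-Net utility):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two uint8 images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
gt = rng.integers(0, 256, (64, 64), dtype=np.uint8)

# Stand-ins for a good restoration and a poor one
slightly_off = np.clip(gt.astype(int) + rng.integers(-2, 3, gt.shape),
                       0, 255).astype(np.uint8)
very_off = np.clip(gt.astype(int) + rng.integers(-40, 41, gt.shape),
                   0, 255).astype(np.uint8)

print(psnr(gt, slightly_off) > psnr(gt, very_off))  # True
```

If the network's PSNR lands close to the bicubic baseline's, that confirms numerically that the model is adding little on this footage.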
I believe I am doing something wrong. Could you please point me in the right direction? These could be my issues:
1) Should I retrain the model for the "old videos" blur kernel?
2) The model is made for reducing motion blur, and I am using it for something it was not made for (general deblurring).
3) My config could be wrong: