-
SeeSR is amazing, and OSEDiff looks even better. It would be incredible if it could also process video frames and stay temporally consistent across frames while enhancing each input frame.
Thank you for al…
-
Hello, thank you very much for your excellent work.
During training, controlnet0, controlnet2, unet1, and unet3 are saved. In the testing phase, I loaded controlnet2 and unet3, but the resulting su…
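For reference, a minimal sketch (not the repo's actual loading code) of how a saved ControlNet/UNet checkpoint pair can be restored for testing with the diffusers API; the checkpoint path and sub-folder names below are hypothetical placeholders, not the repo's real layout.

```python
# Illustrative sketch only: restore a saved ControlNet/UNet pair for testing.
# "ckpt_dir" and the sub-folder names are assumptions, not AddSR's actual layout.
from diffusers import ControlNetModel, UNet2DConditionModel

ckpt_dir = "experiments/addsr/checkpoint-XXXX"  # hypothetical checkpoint directory

# If each sub-module was written with save_pretrained() into its own sub-folder,
# it can be restored the same way and handed to the test pipeline.
controlnet = ControlNetModel.from_pretrained(f"{ckpt_dir}/controlnet")
unet = UNet2DConditionModel.from_pretrained(f"{ckpt_dir}/unet")
```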
-
Can you add a license? If possible, I would like one that allows a wide range of uses, like SeeSR's.
-
Have you applied any data augmentation to the training data? I just fine-tuned the SeeSR model for 1k iterations, and the subjective quality of the output images has clearly deteriorated. The specific manifesta…
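For context, "data augmentation" in this line of work usually means synthesizing LR training inputs from HR images with a randomized degradation pipeline (blur, noise, downsampling, JPEG). The sketch below only illustrates that idea; it is not the actual SeeSR/AddSR degradation code, and all parameter ranges are assumed values.

```python
# Illustrative blur / downsample / noise / JPEG degradation for synthesizing LR
# training inputs. NOT the actual SeeSR/AddSR pipeline; ranges are assumptions.
import io
import random
import numpy as np
from PIL import Image, ImageFilter

def degrade(hr: Image.Image, scale: int = 4) -> Image.Image:
    # Random Gaussian blur on the HR image
    img = hr.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.2, 2.0)))
    # Bicubic downsampling to the LR size
    img = img.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    # Additive Gaussian noise
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, random.uniform(1.0, 10.0), arr.shape)
    img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    # JPEG compression with a random quality factor
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.randint(40, 95))
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```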
-
How can I resolve the problem of 'bert-base-uncased' failing to import?
-
![image](https://github.com/NJU-PCALab/AddSR/assets/46629149/3f1b8445-31d2-4f51-b377-e568867854d1)
It seems a pretrained BERT model weight is required? How should I set this up? Thanks.
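This (and the import question above) usually happens because the tag model tries to fetch 'bert-base-uncased' from the Hugging Face Hub at run time. A minimal sketch of one common workaround is to download the weights once and point the code at the local copy; the target directory below is an assumption.

```python
# Sketch of a workaround, assuming the error is a failed download of
# 'bert-base-uncased' from the Hugging Face Hub. The local path is an assumption.
from huggingface_hub import snapshot_download
from transformers import BertModel, BertTokenizer

local_dir = snapshot_download(
    "bert-base-uncased",
    local_dir="preset/models/bert-base-uncased",
)

# Then replace the 'bert-base-uncased' string in the code (e.g. where the tag
# model builds its text encoder) with this local directory.
tokenizer = BertTokenizer.from_pretrained(local_dir)
model = BertModel.from_pretrained(local_dir)
```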
-
python utils_data/make_paired_data.py \
--gt_path PATH_1 PATH_2 ... \
--save_dir preset/datasets/train_datasets/training_for_dape \
--epoch 1
Should the save directory here be changed to preset/datasets/train_datasets/training_f…
-
Thanks for your great work. I put the weights in a directory like this:
/SeeSR-main/preset/models
--DAPE.pth
--seesr
--stable-diffusion-2-base
Then I ran this command:
python test_seesr.py \
--pr…
-
Hello, thank you for sharing the code of SeeSR!
While reading it, I noticed that it does not seem to perform cross attention between the 'ram_encoder_hidden_states' and the resnet output during training.
The…
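For reference, a rough sketch of what a cross-attention step between a UNet feature map and the RAM tag embeddings would look like; this is an illustration under assumed shapes, dimensions, and module names, not SeeSR's actual code.

```python
# Illustrative sketch only, not SeeSR's implementation: cross attention where a
# UNet feature map (e.g. a resnet block output) queries the RAM tag embeddings
# 'ram_encoder_hidden_states'. Dimensions and head count are assumptions.
import torch
import torch.nn as nn

class TagCrossAttention(nn.Module):
    def __init__(self, dim: int = 320, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat: torch.Tensor, ram_encoder_hidden_states: torch.Tensor):
        # feat: (B, C, H, W) spatial features; ram_encoder_hidden_states: (B, L, C)
        b, c, h, w = feat.shape
        q = feat.flatten(2).transpose(1, 2)                   # (B, H*W, C) queries
        out, _ = self.attn(q, ram_encoder_hidden_states,
                           ram_encoder_hidden_states)         # keys/values: tag embeddings
        return feat + out.transpose(1, 2).reshape(b, c, h, w) # residual connection
```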
-
The link for the paper *FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring* was mistakenly set to *SeeSR: Towards Semantics-Aware Rea…*