hzwer / WACV2024-SAFA

WACV2024 - Scale-Adaptive Feature Aggregation for Efficient Space-Time Video Super-Resolution
MIT License

On the issue of PSNR being considerably reduced. #8

Open hiroesta opened 1 week ago

hiroesta commented 1 week ago

Hello author.

The following command and options were used for training (the code was rewritten only as needed to accept these options, otherwise unchanged): `python3 -m torch.distributed.launch --nproc_per_node=1 train.py --world_size=1 --batch-size=4`. The result was a PSNR of about 10 at 4k steps. This is a considerable discrepancy with the results in the paper; do you know why?

hiroesta commented 1 week ago

I used the adobe240fps dataset for this training.

hzwer commented 1 week ago

The learning rate for this type of work is usually set relatively high. If you train on a single card (implying a smaller total batch size), you often need to reduce the learning rate accordingly. Monitor the loss curve to make sure training does not collapse because of an excessively high learning rate.
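One common way to adjust for a smaller batch is the linear scaling rule: shrink the learning rate in proportion to the batch size. A minimal sketch, assuming a hypothetical reference setting (`base_lr` and `base_batch` below are illustrative values, not taken from this repo or the paper):

```python
def scaled_lr(base_lr: float, base_batch: int, actual_batch: int) -> float:
    """Linear scaling rule: scale the learning rate in proportion
    to the actual total batch size relative to the reference setup."""
    return base_lr * actual_batch / base_batch

# Hypothetical example: if the reference run used lr=2e-4 at total batch 32,
# a single-card run with batch 4 would use a learning rate 8x smaller.
print(scaled_lr(2e-4, 32, 4))
```

This is only a heuristic starting point; as noted above, the loss curve should still be watched, and the rate lowered further if training diverges.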