kkkls / FFTformer

[CVPR 2023] Efficient Frequency Domain-based Transformers for High-Quality Image Deblurring
MIT License

Finetune using single GPU #25

Closed: riestiyazain closed this issue 6 months ago

riestiyazain commented 8 months ago

Dear authors, thank you for open-sourcing the code. I only have a laptop with a single RTX 4080 GPU. Is it possible to fine-tune the pretrained weights with this resource?

kkkls commented 7 months ago

Hello, given the 4080's limited 16 GB of VRAM, you can try using a patch size of 128x128 and a batch size of 2 as a possible solution.
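For illustration, here is a minimal sketch of random 128x128 paired-patch cropping with a batch size of 2 in PyTorch. This is not the repo's actual dataloader; the directory layout (`train/blur`, `train/sharp`) and file naming are assumptions.

```python
import os
import random
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision.transforms.functional import to_tensor

class PairedPatchDataset(Dataset):
    """Loads blurred/sharp image pairs and crops random, spatially aligned patches."""
    def __init__(self, blur_dir, sharp_dir, patch_size=128):
        self.blur_paths = sorted(os.path.join(blur_dir, f) for f in os.listdir(blur_dir))
        self.sharp_paths = sorted(os.path.join(sharp_dir, f) for f in os.listdir(sharp_dir))
        self.patch_size = patch_size

    def __len__(self):
        return len(self.blur_paths)

    def __getitem__(self, idx):
        blur = to_tensor(Image.open(self.blur_paths[idx]).convert("RGB"))
        sharp = to_tensor(Image.open(self.sharp_paths[idx]).convert("RGB"))
        # Crop the same region from both images so the pair stays aligned.
        _, h, w = blur.shape
        ps = self.patch_size
        top = random.randint(0, h - ps)
        left = random.randint(0, w - ps)
        return (blur[:, top:top + ps, left:left + ps],
                sharp[:, top:top + ps, left:left + ps])

# Batch size 2 with 128x128 patches keeps peak VRAM low on a 16 GB GPU.
loader = DataLoader(PairedPatchDataset("train/blur", "train/sharp"),
                    batch_size=2, shuffle=True, num_workers=4, pin_memory=True)
```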

riestiyazain commented 7 months ago

Thank you for the fast reply. Do you have any advice on the hyperparameters I can use to fine-tune the model?

kkkls commented 7 months ago

> Thank you for the fast reply. Do you have any advice on the hyperparameters I can use to fine-tune the model?

You can try fine-tuning with a learning rate of 1e-4, a patch size of 128x128, and a batch size of 2.
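As a rough illustration of those settings, here is a hedged fine-tuning sketch, not the repo's actual training script. The model import path, checkpoint filename, checkpoint key, loss choice, and schedule length are all assumptions; the repo's own training options file is the authoritative reference.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

# Hypothetical import path -- adjust to where the model class lives in this repo.
from basicsr.models.archs.fftformer_arch import fftformer

device = torch.device("cuda")
model = fftformer().to(device)

# Checkpoint path and the "params" key are assumptions; match the released weights' layout.
ckpt = torch.load("pretrained/fftformer_GoPro.pth", map_location=device)
model.load_state_dict(ckpt.get("params", ckpt), strict=True)

optimizer = Adam(model.parameters(), lr=1e-4)            # learning rate suggested above
scheduler = CosineAnnealingLR(optimizer, T_max=100_000,  # schedule length is an assumption
                              eta_min=1e-7)
criterion = torch.nn.L1Loss()                            # loss choice is an assumption

model.train()
for step, (blur, sharp) in enumerate(loader):            # `loader` from the patch sketch above
    blur, sharp = blur.to(device), sharp.to(device)
    optimizer.zero_grad(set_to_none=True)
    loss = criterion(model(blur), sharp)
    loss.backward()
    optimizer.step()
    scheduler.step()
```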