Closed — FrontierBreaker closed this issue 12 months ago
Thanks!
The pretrained weights of DA-CLIP are available here; you can use them to generate degradation embeddings and clean-image embeddings, as shown in the example code.
In addition, I'm planning to release the training code for DA-CLIP later this month.
What's the GPU memory requirement for the training? (I only have a 12 GB GPU, but I would like to try it!)
I think this paper would be impactful for the community. : )
Thanks again for your interest! I trained DA-CLIP on 4 A100 GPUs (a large batch size is needed for contrastive learning) and used a single A100 for the downstream image restoration training. The training details can be found in the paper (Appendix B.1).
Training this model on only a 12 GB GPU might be hard (though you can, of course, reduce the batch size, patch size, and model parameters to fit your hardware). Moreover, I will provide pretrained weights for the downstream diffusion model so you can test it easily.
Got it. Thanks for your timely reply and suggestions! By the way, what was the training time on the 4 A100s, and the per-GPU memory cost in your experiments?
The training takes about 3 hours and uses almost all of the GPU memory. :)
But note the batch size is 784 x 4. You can try a smaller batch size, such as 64, which should also work well.
Got it. Thank you!
Awesome work!