XPixelGroup / HAT

CVPR 2023: Activating More Pixels in Image Super-Resolution Transformer | arXiv: HAT: Hybrid Attention Transformer for Image Restoration

Half precision (fp16) Training and Inference of the HAT models #127

Open · AmirMohamadBabaee opened 8 months ago

AmirMohamadBabaee commented 8 months ago

Hello, thank you for sharing your fantastic work. I'm curious whether you have any plans to share training and inference configurations that use fp16, to accommodate GPUs with limited memory.

chxy95 commented 8 months ago

@AmirMohamadBabaee Haven't tried it yet~

AmirMohamadBabaee commented 8 months ago

If such plans exist, I kindly request further details regarding these configurations. However, should this not be part of your current roadmap, I would greatly value your insights into how I might implement it independently.

chxy95 commented 8 months ago

@AmirMohamadBabaee To be honest, I'm inexperienced in doing this...
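
Since the authors have not published fp16 configurations, PyTorch's automatic mixed precision (AMP) is a common way to attempt this independently. The sketch below shows fp16 inference wrapped in `autocast` and a training step using `GradScaler`. It is only a starting point under stated assumptions: `build_model` (a stand-in for however the HAT network is instantiated and loaded), `lr_image`, and `dataloader` are hypothetical placeholders, not part of this repo's API.

```python
# Minimal AMP (fp16) sketch for a super-resolution model such as HAT.
# Placeholders: build_model(), lr_image, dataloader -- replace with your
# own model loading and data pipeline.
import torch
from torch.cuda.amp import autocast, GradScaler

device = torch.device('cuda')
model = build_model().to(device)  # hypothetical loader for the HAT network

# --- Inference: run the forward pass under autocast ---
model.eval()
with torch.no_grad(), autocast(dtype=torch.float16):
    sr = model(lr_image.to(device))  # lr_image: (1, 3, H, W) fp32 tensor
sr = sr.float()  # cast back to fp32 before saving or computing metrics

# --- Training: autocast the forward pass, scale the loss for backward ---
scaler = GradScaler()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
criterion = torch.nn.L1Loss()

model.train()
for lr_batch, hr_batch in dataloader:  # dataloader is assumed to exist
    optimizer.zero_grad(set_to_none=True)
    with autocast(dtype=torch.float16):
        pred = model(lr_batch.to(device))
        loss = criterion(pred, hr_batch.to(device))
    scaler.scale(loss).backward()  # scale loss to avoid fp16 gradient underflow
    scaler.step(optimizer)         # unscales gradients, then steps
    scaler.update()
```

Note that `GradScaler` only guards against fp16 gradient underflow; whether HAT trains stably end-to-end in fp16 (and how much PSNR/SSIM it costs, if any) has not been verified in this thread.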