caiyuanhao1998 / Retinexformer

"Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement" (ICCV 2023) & (NTIRE 2024 Challenge)
https://arxiv.org/abs/2303.06705
MIT License

Information about training resources? #85

Closed. Koruvika closed this issue 4 months ago.

Koruvika commented 4 months ago

Hi, I'm planning to retrain Retinexformer on my own dataset, but I couldn't find details about your training resources. How much VRAM does training take with the configs from your paper?

caiyuanhao1998 commented 4 months ago

Hi, thanks for your interest.

I suggest training our model on an RTX 3090 for the FiveK and LOL-v2-real datasets, and on an RTX 8000 for the other datasets.

If you train our model at a spatial size of 256x256, as for LOL, FiveK, SDSD, and SMID, the memory usage is about 2-4 GB.

If you train our model at a spatial size of 960x512, as for SID, the memory usage exceeds 40 GB.
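If you want to verify these numbers on your own GPU before committing to a full run, a minimal sketch like the one below can probe peak training VRAM at both patch sizes. `build_model()` is a hypothetical stand-in; in a real run you would instantiate RetinexFormer from this repo with your config, and the batch size here is illustrative, not taken from the paper's configs.

```python
# Minimal sketch: probe peak training VRAM at different patch sizes.
import torch
import torch.nn as nn

def build_model() -> nn.Module:
    # Placeholder network -- substitute the actual RetinexFormer model here.
    return nn.Sequential(
        nn.Conv2d(3, 40, 3, padding=1),
        nn.GELU(),
        nn.Conv2d(40, 3, 3, padding=1),
    )

def peak_vram_gb(model: nn.Module, h: int, w: int, batch: int = 8) -> float:
    device = torch.device("cuda")
    model = model.to(device)
    torch.cuda.reset_peak_memory_stats(device)
    x = torch.randn(batch, 3, h, w, device=device)
    # One forward/backward pass approximates the steady-state training footprint.
    loss = model(x).abs().mean()
    loss.backward()
    torch.cuda.synchronize(device)
    return torch.cuda.max_memory_allocated(device) / 1024**3

if __name__ == "__main__":
    for h, w in [(256, 256), (512, 960)]:  # LOL-style patches vs. SID-style crops
        print(f"{h}x{w}: {peak_vram_gb(build_model(), h, w):.2f} GB peak")
```

Because activation memory grows roughly with spatial area, the jump from 256x256 to 960x512 is what pushes SID training past a 24 GB card.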

If you train our model on the NTIRE datasets with our multi-GPU and mixed-precision training, the VRAM usage is about 48 GB.
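For reference, a mixed-precision training step in plain PyTorch looks like the sketch below; the repo wires this up through its BasicSR-style training options rather than hand-written loops, and `model`, `optimizer`, `low`, and `gt` are assumed to be set up elsewhere.

```python
# Generic PyTorch mixed-precision training step (autocast + GradScaler).
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, optimizer, low, gt):
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():      # fp16 forward pass cuts activation memory
        pred = model(low)
        loss = torch.nn.functional.l1_loss(pred, gt)
    scaler.scale(loss).backward()        # loss scaling keeps fp16 gradients stable
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```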

If you find our repo useful, please help us star it. Thank you.