jiangjin1246 opened 10 months ago
The training details are outlined below:
Step 1: Obtain ROHQD. We fine-tune the model provided in taming-transformers on FFHQ with an input size of 512x512. Note that the pre-trained model was trained with an input size of 256x256. This step runs for 180 epochs and takes approximately 8 days.
Step 2: Obtain RestoreFormer++. First, we freeze the decoder and train only the encoder with degraded-clean pairs. After about 100 epochs, we fine-tune the encoder and decoder together for approximately 50 more epochs. This process takes around 10 days.
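For clarity, the staged schedule in Step 2 can be sketched as below. This is only an illustration of which modules receive gradient updates at each stage; the module names and epoch constants are assumptions, not taken from the repo's actual training code.

```python
# Hypothetical sketch of the two-stage RestoreFormer++ schedule described above.
# ENCODER_ONLY_EPOCHS and JOINT_EPOCHS are illustrative constants.

ENCODER_ONLY_EPOCHS = 100  # stage 1: decoder frozen, encoder trained alone
JOINT_EPOCHS = 50          # stage 2: encoder + decoder fine-tuned together

def trainable_modules(epoch: int) -> list:
    """Return which modules are updated at a given 1-indexed epoch."""
    if epoch <= ENCODER_ONLY_EPOCHS:
        return ["encoder"]             # decoder parameters stay frozen
    if epoch <= ENCODER_ONLY_EPOCHS + JOINT_EPOCHS:
        return ["encoder", "decoder"]  # joint fine-tuning
    raise ValueError("epoch is beyond the described schedule")

print(trainable_modules(50))   # ['encoder']
print(trainable_modules(120))  # ['encoder', 'decoder']
```

In a PyTorch implementation, "frozen" would typically mean setting `requires_grad = False` on the decoder's parameters for the first stage and re-enabling it for the joint stage.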
Thank you for your detailed answer! It really helps! By the way, what kind of GPUs are utilized, and how many GPUs are needed to attain such training efficiency?
4 V100s with a batch size of 4 per GPU.
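For anyone reproducing this, the global batch size implied by that setup works out as follows, assuming plain data parallelism with no gradient accumulation (an assumption, not stated above):

```python
# Effective (global) batch size for the reported 4-GPU setup.
num_gpus = 4       # V100s
batch_per_gpu = 4  # samples per GPU per step

global_batch = num_gpus * batch_per_gpu
print(global_batch)  # 16
```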
Hi @wzhouxiff, thanks for sharing your great work! I want to ask a question:
"/group/30042/zhouxiawang/checkpoints/RestoreFormer/release/logs/2022-11-11T18-36-57_ROHQDROHQD_gpus7_lmdb_h4_seed30/checkpoints/last.ckpt.62"
Hi, thanks for your remarkable work. I have several questions:
For how many epochs was the RestoreFormer++ model trained?
How long does it take to train the RestoreFormer++ model?
Thanks again, and I look forward to your feedback.