lzhhha opened this issue 1 year ago
Hello @Ree1s, really nice work! I have the same question and look forward to your reply. In 'configs/ffhq_liifsr3_scaler_16_128.json', the batch size is set to 32 per GPU, and in run.sh the number of GPUs for training is 4. Does that make the effective batch size 32×4 during training? Also, how long does training take? Thank you very much!
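For what it's worth, if the repo follows the usual PyTorch DistributedDataParallel convention of one process per GPU (an assumption; I haven't verified this repo's launcher), the global batch size is just the per-GPU value times the GPU count:

```python
# Sketch of the effective-batch-size arithmetic under the usual one-process-per-GPU
# DDP setup. The two values below are taken from the question above, not verified
# against the repo's actual config.
per_gpu_batch = 32  # "batchsize" in configs/ffhq_liifsr3_scaler_16_128.json
num_gpus = 4        # GPUs launched in run.sh

effective_batch = per_gpu_batch * num_gpus
print(effective_batch)  # 128
```

So under that assumption the model would see 128 samples per optimizer step.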
Hello @lzhhha. Have you solved this problem? Thanks!
What I want to ask is: did anyone run the tests successfully? I commented out the training code and only ran the test code, but the images and metrics I got were terrible. I wonder what went wrong.
@XLR-man How is your code set up? Can you show your file tree?
Best regards! tongchangD
What batch size and number of epochs did you use when training 16×16→128×128 SR on two 24GB NVIDIA RTX A5000 GPUs on the FFHQ dataset? How many days does one training run take? Thanks.