Hi, for every training example we only denoise 1 image, so with a batch size of 24 we are denoising 24 images per training step. In comparison, every sample at inference requires denoising 16 images, so with sample_num = 4 we are denoising 16 × 4 = 64 images per inference step.
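As a rough sanity check, here is a minimal sketch of that arithmetic; the per-GPU batch size and the 16 views per sample come from the explanation above, and the variable names are just illustrative, not taken from the repo's code:

```python
# Back-of-the-envelope count of images denoised per step (numbers from the thread above).
train_batch_size = 24    # training examples per 40 GB GPU, 1 denoised image each
views_per_sample = 16    # images denoised for every sample at inference time
sample_num = 4           # samples generated in parallel on a 48 GB GPU

train_images_per_step = train_batch_size * 1           # 24
infer_images_per_step = sample_num * views_per_sample  # 64

print(f"training:  {train_images_per_step} images denoised per step")
print(f"inference: {infer_images_per_step} images denoised per step")
```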
Can we view sample_num as the equivalent of the batch size in training? In the paper, the training batch size is 24 per 40 GB GPU, but sample_num only goes up to 4 on a 48 GB GPU. How can I enable a larger sample_num in inference?
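One possible workaround is just a sketch, not the repo's actual API: generate the samples in smaller chunks and concatenate the results, which keeps peak memory at the chunk level while giving a larger effective sample_num. Here `pipeline` and its `sample_num` argument are placeholders for whatever sampling entry point the code exposes:

```python
import torch

def sample_in_chunks(pipeline, total_samples=16, chunk_size=4):
    """Run inference in chunks of `chunk_size` samples to avoid OOM,
    then concatenate the outputs into one batch of `total_samples`."""
    outputs = []
    with torch.no_grad():
        for start in range(0, total_samples, chunk_size):
            n = min(chunk_size, total_samples - start)
            # Each call denoises n * 16 images, matching the counts discussed above.
            outputs.append(pipeline(sample_num=n))
    return torch.cat(outputs, dim=0)
```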