The samplers call the UNet as `model([x, x], [t, t], [conditioning, unconditional_conditioning])`. Resource-wise, this is equivalent to using a batch size that is twice the true batch size.
This PR splits that call into two: one for the normal conditioning and one for the unconditional conditioning. This can theoretically reduce VRAM usage by up to 50%, since during sampling most VRAM is allocated to the UNet.
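For illustration, here is a minimal sketch of the change, assuming an ldm-style sampler where the UNet is called as `model(x, t, c)` and `guidance_scale` is the classifier-free guidance weight (names are illustrative, not the exact diff):

```python
import torch

def cfg_eps_batched(model, x, t, c, uc, guidance_scale):
    # Current behaviour: one UNet call on a doubled batch.
    # Peak activation memory corresponds to batch size 2 * len(x).
    x_in = torch.cat([x, x])
    t_in = torch.cat([t, t])
    c_in = torch.cat([uc, c])
    eps_uncond, eps_cond = model(x_in, t_in, c_in).chunk(2)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def cfg_eps_split(model, x, t, c, uc, guidance_scale):
    # This PR: two sequential UNet calls at the true batch size.
    # Roughly halves the UNet's activation memory, at the cost of
    # the extra per-call overhead reflected in the timings below.
    eps_cond = model(x, t, c)
    eps_uncond = model(x, t, uc)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

Both variants produce the same guided noise prediction; only the memory/latency trade-off differs.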
I haven't profiled VRAM usage (I'm not sure how), so I don't have precise numbers; if someone knows how, I'd be interested to hear. In terms of performance, inference time at 5x512x512 on my machine (Quadro RTX 4000) went from 2.2 s/it to 3.5 s/it.