Sorry for the late response. How long is training expected to take for you? Have you encountered data-loading bottlenecks? Are you using any mini_batch_size?
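As a quick way to check the data-loading question above, you can time the loader separately from the optimization step. This is a minimal, generic sketch; fake_loader and the lambda step are toy stand-ins for your real dataloader and training step, not this repo's actual API:

```python
import time

def profile_loop(loader, step_fn, n_steps=20):
    """Average the time spent fetching batches vs. running the
    training step, to spot data-loading bottlenecks."""
    load_t, step_t = 0.0, 0.0
    it = iter(loader)
    for _ in range(n_steps):
        t0 = time.perf_counter()
        batch = next(it)          # data loading / preprocessing
        t1 = time.perf_counter()
        step_fn(batch)            # forward/backward/optimizer step
        t2 = time.perf_counter()
        load_t += t1 - t0
        step_t += t2 - t1
    return load_t / n_steps, step_t / n_steps

# Toy stand-ins: a "loader" that sleeps to mimic disk/CPU work,
# and a "step" that sleeps to mimic GPU compute.
def fake_loader():
    while True:
        time.sleep(0.01)
        yield "batch"

avg_load, avg_step = profile_loop(fake_loader(), lambda b: time.sleep(0.001))
print(f"load {avg_load*1000:.1f} ms/step, step {avg_step*1000:.1f} ms/step")
```

If the load time dominates the step time, the GPU is starved and the fix is on the data side (e.g. more loader workers or cached preprocessing) rather than in the model code.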
Hi,
thank you for this amazing work! The flexibility of this method is impressive.
However, textual inversion is very slow for me: even on an A100 it takes about 10 seconds per step, whereas I would expect something like 10 steps per second, as with normal diffusion training. Did you also encounter problems like this, or do you have a guess why this happens?
Best regards, Sidney