gwang-kim / DiffusionCLIP

[CVPR 2022] Official PyTorch Implementation for DiffusionCLIP: Text-guided Image Manipulation Using Diffusion Models

GPU VRAM load slowly increases #19

Open GiannisPikoulis opened 1 year ago

GiannisPikoulis commented 1 year ago

Hello.

Thank you for your code. I am running clip_finetune() on CelebA-HQ 256x256 and monitoring my GPU VRAM usage. I notice a gradual increase in VRAM while the latents are being precomputed from the given dataset. Is this normal, and which part of the code is responsible for this behavior? I thought VRAM usage should remain steady throughout the training procedure and reach its peak at the start. Given this behavior, a large enough n_precomp_img will eventually lead to a memory overflow, which is definitely not desired.

Thanks in advance.
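For reference, a minimal sketch of the kind of pattern that typically causes this (this is not the repository's actual code; model.encode and precompute_latents are hypothetical placeholders for the DDIM inversion step): if each precomputed latent is kept on the GPU as it is appended to a list, VRAM grows with every image, whereas detaching and moving each latent to CPU keeps GPU memory flat during precomputation.

```python
import torch

# Hypothetical sketch, not the repo's actual implementation:
# accumulating latents on the GPU makes VRAM grow per image;
# moving each one to CPU right after inversion avoids that.
@torch.no_grad()
def precompute_latents(model, images, device="cuda"):
    latents = []
    for img in images:
        x = img.unsqueeze(0).to(device)      # add batch dim, move to GPU
        lat = model.encode(x)                # placeholder for DDIM inversion
        latents.append(lat.detach().cpu())   # release GPU memory for this sample
    return latents
```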

hniksoleimani commented 1 year ago

It happened to me as well and the fine-tuning process stopped. Using --clip_finetune_eff instead eventually leads to another error:

"ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([1, 3, 256, 256])."