I haven't tried quantization on VToonify before and have no experience with it, so I'm afraid I can't tell you whether this is the expected behavior or whether something is wrong.
I want to reduce memory consumption without reducing the batch size. I tried mixed precision, and to keep all tensors the same shape I resized every picture to a fixed resolution (320x320), so they should allocate the minimum amount of memory, but it is still using too much. Can you share some tips on how I can reduce the memory consumption?
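For reference, a minimal sketch of the setup described above (mixed precision at inference time plus a fixed input size), assuming a generic PyTorch image-to-image model; `model` is a placeholder, not VToonify's actual loading code:

```python
import torch
from PIL import Image
from torchvision import transforms

# Resize every input to the same fixed resolution (e.g. 320x320) so the
# CUDA allocator can reuse the same buffers from image to image.
preprocess = transforms.Compose([
    transforms.Resize((320, 320)),
    transforms.ToTensor(),
])

def stylize(model, path):
    """Run one image through `model` (a placeholder for the pretrained network)
    with mixed precision and without building an autograd graph."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).cuda()
    with torch.no_grad(), torch.cuda.amp.autocast():
        y = model(x)
    return y.float().cpu()   # move the result off the GPU right away
```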
I'm not familiar with model compression. I only know there is a technique called model distillation, where a smaller student network is trained to approximate the teacher network.
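A minimal sketch of the distillation idea mentioned above: a small student network is trained to reproduce the outputs of a frozen teacher. The tiny convolutional networks and the random batch below are stand-ins only, not anything from VToonify:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks: in practice the teacher is the large pretrained model
# and the student is a smaller architecture you design.
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 3, 3, padding=1))
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 3, 3, padding=1))

teacher.eval()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(100):                  # stand-in for iterating over a real data loader
    x = torch.rand(4, 3, 320, 320)       # dummy batch of images
    with torch.no_grad():
        target = teacher(x)              # teacher output, no gradients needed
    pred = student(x)                    # student tries to reproduce it
    loss = F.mse_loss(pred, target)      # pixel loss for an image-to-image model
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```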
Ok. Thank you.
Is a gradual increase in memory expected behavior? The memory keeps increasing. I started with a 4 GB GPU, then switched to 8 GB, and now 12 GB, but after every picture or two the memory grows by a few hundred MB and keeps growing until all of it is allocated. How can I get rid of these memory leaks?
It's the expected behavior.
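The thread does not show the inference loop, but steady growth of a few hundred MB per image is typically caused by building autograd history or holding references to GPU tensors between iterations. A hedged sketch of a loop that avoids both; `model`, `paths`, and `load_image` are placeholders:

```python
import torch

def run_inference(model, paths, load_image):
    """Process images one by one without accumulating GPU memory.

    `model`, `paths`, and `load_image` are placeholders for the pretrained
    network, the list of input files, and the preprocessing function.
    """
    model.eval()
    results = []
    for path in paths:
        x = load_image(path).cuda()
        with torch.no_grad():        # do not build an autograd graph
            y = model(x)
        results.append(y.cpu())      # keep only a CPU copy between iterations
        del x, y                     # drop GPU references before the next image
    # Optional: return cached blocks to the driver (does not free live tensors).
    torch.cuda.empty_cache()
    return results
```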
Hi. I tried dynamic quantization of a pretrained model, but the resulting images are of very bad quality. Is this the expected behavior, or did I do something wrong? I just want to reduce memory consumption without limiting the batch size.
Here's my quantization code:
And here is the output image:
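The attached code and output image are not reproduced above. As a rough illustration only (not the poster's actual snippet), dynamic quantization in PyTorch is usually applied as below; it only converts the listed layer types and runs on CPU, which may be part of why a convolution-heavy generator sees little benefit from it:

```python
import torch
import torch.nn as nn

def quantize_for_cpu(model):
    """Apply PyTorch dynamic quantization to a pretrained model.

    `model` is a placeholder for the pretrained network. Dynamic quantization
    only converts the listed layer types (here nn.Linear) and runs on CPU, so
    convolutional layers are left untouched.
    """
    model = model.cpu().eval()
    return torch.quantization.quantize_dynamic(
        model,
        {nn.Linear},        # layer types to quantize dynamically
        dtype=torch.qint8,
    )
```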