TL;DR
Research on compressing GAN generators by combining distillation, quantization, and pruning. Quantization is normally non-differentiable, but the authors use a pseudo-gradient to enable end-to-end training. They compress an existing model to 1/47th of its original size.
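The "pseudo-gradient" trick for non-differentiable quantization is commonly realized as a straight-through estimator (STE): the forward pass rounds weights to discrete levels, while the backward pass pretends the rounding was the identity so gradients flow through. The sketch below is a minimal NumPy illustration of that idea, not the paper's exact formulation; the function names and bit-width choice are assumptions for illustration.

```python
import numpy as np

def quantize(w, num_bits=8):
    # Uniform symmetric quantization: snap weights to 2^(b-1)-1 levels.
    # np.round is a step function, so its true gradient is zero a.e.
    scale = (np.abs(w).max() + 1e-8) / (2 ** (num_bits - 1) - 1)
    return np.round(w / scale) * scale

def ste_backward(upstream_grad):
    # Straight-through estimator: treat d(quantize)/dw as 1,
    # so the upstream gradient passes through unchanged.
    return upstream_grad

# Toy update step: forward uses quantized weights,
# backward updates the full-precision weights via the STE.
w = np.array([0.10, -0.50, 0.90])
w_q = quantize(w)                  # used in the forward pass
grad_wq = np.array([0.2, -0.1, 0.3])  # pretend loss gradient w.r.t. w_q
w = w - 0.1 * ste_backward(grad_wq)   # full-precision weights get updated
```

This is why end-to-end training remains possible: the discrete weights are only a view of the underlying continuous weights, which continue to receive gradient updates.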
Why it matters
Paper URL
https://arxiv.org/abs/2008.11062
Submission Date (yyyy/mm/dd)
2020/08/25
Authors and institutions
Haotao Wang, Shupeng Gui, Haichuan Yang, Ji Liu, Zhangyang Wang
Methods
Results
Comments
ECCV 2020