VITA-Group / AGD

[ICML2020] "AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks" by Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, Zhangyang Wang
MIT License

about quantize #7

Closed shenshen0318 closed 4 years ago

shenshen0318 commented 4 years ago

In quantize.py I found QConv2d, QLinear, and QConvTranspose2d (I assume these are quantized versions of Conv2d, Linear, and ConvTranspose2d). But in model_infer.py I only see that quantize is set to True, and I don't know how to construct a model using them. Furthermore, I want to build a discriminator that contains QConv2d (the model is a list, like [nn.ReflectionPad2d(padding), QConv2d, nn.LeakyReLU(inplace=True)]). Is it enough for my forward function to just write return self.model(x)?

tilmto commented 4 years ago

If you set C.quantize = True in config_train.py, the model will be trained with the default 8-bit quantization. You can change the default bit-width of the QConv2d class in quantize.py. I think self.model(x) is enough for your case.
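For reference, a minimal sketch of such a discriminator block is below. It assumes QConv2d accepts the standard nn.Conv2d constructor arguments (check quantize.py for the exact signature and the default bit-width), and it wraps the layer list in nn.Sequential, since a plain Python list is not callable and its parameters would not be registered:

```python
# Minimal sketch of a quantized discriminator block, assuming QConv2d in
# quantize.py takes the standard nn.Conv2d arguments; verify the actual
# signature and default bit-width in the repo before using.
import torch
import torch.nn as nn
from quantize import QConv2d  # quantized drop-in for nn.Conv2d


class QuantDiscriminatorBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Wrap the layers in nn.Sequential so forward can simply call
        # self.model(x) and all parameters are properly registered.
        self.model = nn.Sequential(
            nn.ReflectionPad2d(padding),
            QConv2d(in_ch, out_ch, kernel_size),  # quantized convolution
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        # self.model(x) is sufficient: nn.Sequential applies the layers in order.
        return self.model(x)


if __name__ == "__main__":
    block = QuantDiscriminatorBlock(3, 64)
    out = block(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 64, 64, 64])
```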

shenshen0318 commented 4 years ago

Thank you for your reply! One more question: operations.py contains your custom classes (like Conv, ConvTranspose2dNorm, ConvNorm). Are they quantized and used to build NAS_GAN?

tilmto commented 4 years ago

Yes, they are the basic searchable blocks used to build the supernet and to construct the final derived architecture. But they will be quantized only if you set C.quantize = True in config_train.py; otherwise they remain full precision.
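Conceptually, the flag works like the hypothetical helper below; the actual selection logic in operations.py may differ, and make_conv is only an illustrative name:

```python
# Illustrative sketch of switching between quantized and full-precision
# convolutions based on the quantize flag from config_train.py; the real
# code in operations.py may structure this differently.
import torch.nn as nn
from quantize import QConv2d


def make_conv(in_ch, out_ch, kernel_size, quantize=True):
    # With C.quantize = True the searchable blocks use QConv2d
    # (8-bit by default); otherwise they fall back to nn.Conv2d.
    if quantize:
        return QConv2d(in_ch, out_ch, kernel_size)
    return nn.Conv2d(in_ch, out_ch, kernel_size)
```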

shenshen0318 commented 4 years ago

Thanks! I have no more questions at present. You can close this issue.