Closed: shenshen0318 closed this issue 4 years ago
If you set C.quantize = True in config_train.py, the model will be trained with the default bit-width of 8 bits. You can change the default bit-width of the class QConv2d in quantize.py. I think self.model(x) is enough for your case.
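For concreteness, here is a minimal sketch of the discriminator described in the question, assuming QConv2d from quantize.py takes nn.Conv2d-style constructor arguments (the class name QuantDiscriminator and the channel/kernel settings below are made up for illustration and are not from the repository). The key detail is that the Python list of layers must be wrapped in nn.Sequential so that calling self.model(x) actually runs them in order.

```python
import torch.nn as nn
from quantize import QConv2d  # quantized conv layer from this repository


class QuantDiscriminator(nn.Module):
    """Hypothetical discriminator built from the layer list in the question."""

    def __init__(self, in_ch=3, out_ch=64, kernel_size=4, padding=1):
        super().__init__()
        layers = [
            nn.ReflectionPad2d(padding),
            # Constructor arguments assumed to mirror nn.Conv2d; check quantize.py
            # for the actual signature and default bit-width.
            QConv2d(in_ch, out_ch, kernel_size, stride=2),
            nn.LeakyReLU(inplace=True),
        ]
        # Wrapping the list in nn.Sequential registers the submodules and lets
        # a single call run them sequentially.
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        # As suggested above, one pass through the stacked layers is enough.
        return self.model(x)
```

If the layers are kept as a plain Python list instead of an nn.Sequential (or nn.ModuleList), their parameters are not registered and self.model(x) is not callable, so the wrapping step matters.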
Thank you for your reply!!! Another question: operations.py contains your custom classes (like Conv, ConvTranspose2dNorm, ConvNorm). Are they quantized and used to build NAS_GAN?
Yes, they are the basic searchable blocks used to build the supernet and to construct the final derived architecture. But they will be quantized only if you set C.quantize = True in config_train.py; otherwise they will be full-precision ones.
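As a rough illustration of what that flag implies, the sketch below shows one way a block could switch between a quantized and a full-precision convolution. The helper make_conv is hypothetical and not a function from operations.py, and the QConv2d signature is again assumed to mirror nn.Conv2d; the repository's actual selection logic may differ.

```python
import torch.nn as nn
from quantize import QConv2d  # quantized conv layer from this repository


def make_conv(c_in, c_out, kernel_size, quantize=False, **kwargs):
    """Hypothetical helper: pick the quantized conv when quantize is True,
    otherwise fall back to the standard full-precision nn.Conv2d."""
    if quantize:
        # Assumed Conv2d-like signature; bit-width defaults come from quantize.py.
        return QConv2d(c_in, c_out, kernel_size, **kwargs)
    return nn.Conv2d(c_in, c_out, kernel_size, **kwargs)
```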
Thanks! I have no questions at present. You can close this issue.
In quantize.py, I found QConv2d, QLinear, and QConvTranspose2d (I guess they are quantized versions of Conv2d, Linear, and ConvTranspose2d). But in model_infe.py, I only see that quantize is set to True, and I don't know how to construct a model using them. Furthermore, I want to build a discriminator that contains QConv2d (the model is a list, like [nn.ReflectionPad2d(padding), QConv2d, nn.LeakyReLU(inplace=True)]). Is it enough for my forward function to just write return self.model(x)?