thomasave opened 10 months ago
Hi,
Thanks for your interest in our project.
I will take a look at the scenario you propose, combining quantization and pruning. If it is a bug, I will try to fix it. I would also note that most of our pruning processes require additional training, which may shift the weights and invalidate your earlier quantization. If you have any further questions, please do not hesitate to contact us.
Best, Frank
Hi,
Thank you for looking into this! I am indeed aware that the pruning process would require additional training, but would it not be possible to do this training quantization-aware? It would not be a problem that the low-precision model weights would shift during the pruning process, that was more my intention actually.
Kind regards, Thomas
Hello Thomas,
Thanks for the further information. Combining quantization and pruning in one training process does seem to make sense. I will investigate and find out whether we currently support this configuration.
Best, Frank
Hello,
I'm attempting to train a model for a microcontroller that only supports 8-bit precision or lower. This works perfectly when training with your `QuantizationAwareTrainingConfig`. In addition, we also want to prune the network to reduce the number of parameters in our model. Luckily, the `prepare_compression` method accepts multiple configurations, so I attempted to also pass a `WeightPruningConfig`. This fails, however, with the following traceback:

I was wondering whether this is a supported use case and I'm doing something wrong, or whether combining multiple compression methods is not yet supported?
The following code can be used to minimally reproduce the error:
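(The original snippet was not captured in this excerpt. The following is a hypothetical sketch of the shape of code being described, assuming the `neural_compressor` package with the `QuantizationAwareTrainingConfig`, `WeightPruningConfig`, and `prepare_compression` names mentioned above; the toy PyTorch model, import paths, and `WeightPruningConfig` parameters are illustrative assumptions, not the reporter's actual code.)

```python
# Hypothetical sketch -- import paths and config parameters are assumed
# and may differ between neural_compressor versions.
import torch
from neural_compressor import QuantizationAwareTrainingConfig
from neural_compressor.config import WeightPruningConfig
from neural_compressor.training import prepare_compression

# Toy model standing in for the reporter's microcontroller-targeted network.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)

quant_conf = QuantizationAwareTrainingConfig()
prune_conf = WeightPruningConfig(target_sparsity=0.5)  # illustrative value

# Passing both configs together is the step that reportedly raises the traceback.
compression_manager = prepare_compression(model, [quant_conf, prune_conf])
```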