An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
Describe the bug:
As far as I know, most PTQ (post-training quantization) methods do not require retraining the model, but NNI's current PTQ support seems to require running the full training process, which costs too much time. Is there any way to run NNI's PTQ without training?
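For context, this is what I mean by PTQ needing no training: typical PTQ only runs forward passes on a small calibration set to collect activation statistics, then derives scale/zero-point from them; there are no gradient updates. The sketch below is generic and illustrative (the function names are mine, not NNI's API):

```python
# Minimal sketch of PTQ calibration: only forward-pass statistics,
# no training loop. Names here are illustrative, not NNI's API.

def calibrate_minmax(calibration_batches):
    """Collect the observed min/max over the calibration data
    (one forward pass per batch, no gradients needed)."""
    lo = min(min(batch) for batch in calibration_batches)
    hi = max(max(batch) for batch in calibration_batches)
    return lo, hi

def affine_qparams(lo, hi, num_bits=8):
    """Derive scale and zero-point for unsigned affine quantization."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # range must include zero
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, int(zero_point)

def quantize(x, scale, zero_point, num_bits=8):
    """Quantize a single value with the calibrated parameters."""
    q = round(x / scale) + zero_point
    return max(0, min(2 ** num_bits - 1, q))
```

So the only data-dependent step is the calibration forward pass; I would expect NNI's PTQ path to need the same, rather than a full training run.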
Environment:
Reproduce the problem:
- Code | Example:
- How to reproduce: