Closed by pskoko 1 year ago
@AlexKoff88, @KodiaqQ, please help.
Hi @pskoko! Thanks for noticing that. This is an experimental feature that we would like to implement but still cannot make work correctly due to resource constraints. If you are experienced in this area, feel free to fix it and contribute. Since it is experimental, we don't have strict acceptance criteria for contributions. For now, I unfortunately cannot give any timeline for when we will get back to it.
This issue will be closed in 2 weeks in case of no activity.
Ref. 111486
@pskoko we don't think use_layerwise_tuning was ever a mature feature; an initial version was implemented only for experiments. May I ask how important this feature is for you and what you are trying to achieve with it? Perhaps we could help you reach the required accuracy without using this feature/parameter.
Please also note that POT will be deprecated starting with v2023.0, and we highly recommend using NNCF instead (docs here). Though use_layerwise_tuning is not available in NNCF, there is a chance to reach the required accuracy without this parameter.
@avitial Hi, thanks for the response. I managed to use accuracy-aware quantization to achieve the results I want.
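For reference, a minimal sketch of what an accuracy-aware POT configuration might look like; the algorithm name follows the POT docs, but the concrete values (`stat_subset_size`, `maximal_drop`) are illustrative assumptions, not the ones actually used by the reporter:

```python
# Hypothetical POT AccuracyAwareQuantization configuration.
# The parameter values below are assumptions for illustration only.
algorithms = [
    {
        "name": "AccuracyAwareQuantization",
        "params": {
            "target_device": "CPU",     # assumed target device
            "stat_subset_size": 300,    # assumed calibration subset size
            "maximal_drop": 0.01,       # assumed: tolerate at most 1% accuracy drop
        },
    }
]
```

This config would be passed to `create_pipeline` together with an engine and data loader, as shown in the POT usage docs linked in the reproduction steps.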
System information (version)
Detailed description
When performing post-training quantization with the DefaultQuantization algorithm and the use_layerwise_tuning parameter set to True, quantization fails with the following error:
The OpenVINO distribution is from pip for Python 3.7: openvino_dev-2022.2.0-7713-py3-none-any.whl. The same issue occurs with Python 3.8 and with OpenVINO 2022.1.0.
Steps to reproduce
Download the resnet-50-pytorch model using omz_downloader --name resnet-50-pytorch, convert it to OpenVINO IR using omz_converter --name resnet-50-pytorch, and download some images for quantization (for example https://github.com/fastai/imagenette)
Create a quantization script using the DataLoader for images from https://docs.openvino.ai/latest/pot_default_quantization_usage.html#prepare-data-and-dataset-interface and the quantization code from https://docs.openvino.ai/latest/pot_default_quantization_usage.html#run-quantization
Set use_layerwise_tuning to True in the algorithm parameters:
Run the quantization script
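The configuration change in step 3 can be sketched as below; the parameter names follow the POT DefaultQuantization documentation, while the surrounding values (preset, subset size, target device) are illustrative assumptions:

```python
# Sketch of the algorithm configuration with layer-wise tuning enabled.
# Only use_layerwise_tuning is essential here; the other values are
# assumptions taken from typical POT examples.
algorithms = [
    {
        "name": "DefaultQuantization",
        "params": {
            "target_device": "CPU",        # assumed target device
            "preset": "performance",       # assumed preset
            "stat_subset_size": 300,       # assumed calibration subset size
            "use_layerwise_tuning": True,  # the parameter that triggers the failure
        },
    }
]
```

With `use_layerwise_tuning` left at its default (False), the same pipeline completes without error, which isolates the failure to this parameter.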
Attached is an example script for reproducing the error, without the model and sample images since GitHub limits attachments to 25 MB: example.txt
Issue submission checklist