openvinotoolkit / nncf

Neural Network Compression Framework for enhanced OpenVINO™ inference

`StatisticsNotCollectedError` when running optimize stable diffusion #1997

Closed zero-nnkn closed 1 year ago

zero-nnkn commented 1 year ago

I am using optimum to optimize Stable Diffusion, based on train_text_to_image_qat.py, with the following command:

python train_text_to_image_qat.py \
    --ema_device="cpu" \
    --use_kd \
    --model_id="svjack/Stable-Diffusion-Pokemon-en" \
    --center_crop \
    --random_flip \
    --dataloader_num_workers=1 \
    --dataset_name="lambdalabs/pokemon-blip-captions" \
    --max_train_steps=8000 \
    --opt_init_steps=300 \
    --tome_ratio=0.5 \
    --quantization_mode="aggressive" \
    --mixed_precision="fp16" \
    --output_dir=sd-quantized-pokemon

The problem occurs when I pass --mixed_precision="fp16" (on both Google Colab and Kaggle): the nncf.torch.create_compressed_model function raises StatisticsNotCollectedError. My NNCF version is 2.5.0. Can anyone help?
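For context, the call follows the usual NNCF 2.5 pattern. Below is a minimal standalone sketch of that pattern, not the actual script: the model, config values, and dataloader are illustrative placeholders, with num_init_samples standing in for --opt_init_steps.

```python
# Minimal standalone sketch of the failing call pattern (NNCF 2.5 API);
# the model, config values, and dataloader are placeholders.
import torch
from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 64, 64]},
    "compression": {
        "algorithm": "quantization",
        # Range initialization collects activation statistics over the
        # first N samples; --opt_init_steps plays this role in the script.
        "initializer": {"range": {"num_init_samples": 4}},
    },
})

# The initialization dataloader feeds the statistic collectors.
# If no statistics are collected before they are queried,
# collector.get_statistics() raises StatisticsNotCollectedError.
init_loader = torch.utils.data.DataLoader(
    [(torch.randn(3, 64, 64), 0) for _ in range(4)], batch_size=2
)
nncf_config = register_default_init_args(nncf_config, init_loader)

compression_controller, model = create_compressed_model(model, nncf_config)
```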

Traceback (most recent call last):
  File "/content/train_text_to_image_qat.py", line 1148, in <module>
    main()
  File "/content/train_text_to_image_qat.py", line 972, in main
    compression_controller, unet = create_compressed_model(unet, nncf_config)
  File "/usr/local/lib/python3.10/dist-packages/nncf/telemetry/decorator.py", line 71, in wrapped
    retval = fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/nncf/torch/model_creation.py", line 117, in create_compressed_model
    compressed_model = builder.apply_to(nncf_network)
  File "/usr/local/lib/python3.10/dist-packages/nncf/torch/compression_method_api.py", line 123, in apply_to
    transformation_layout = self.get_transformation_layout(model)
  File "/usr/local/lib/python3.10/dist-packages/nncf/torch/compression_method_api.py", line 142, in get_transformation_layout
    layout = self._get_transformation_layout(model)
  File "/usr/local/lib/python3.10/dist-packages/nncf/torch/quantization/algo.py", line 626, in _get_transformation_layout
    self._pt_quantizer_setup = self._get_quantizer_setup(target_model)
  File "/usr/local/lib/python3.10/dist-packages/nncf/torch/quantization/algo.py", line 721, in _get_quantizer_setup
    stats_for_range_init = self._get_statistics_for_final_range_init(
  File "/usr/local/lib/python3.10/dist-packages/nncf/torch/quantization/algo.py", line 694, in _get_statistics_for_final_range_init
    return self.get_statistics_for_quantizer_setup(target_model, quantizer_setup, range_init_params)
  File "/usr/local/lib/python3.10/dist-packages/nncf/torch/quantization/algo.py", line 688, in get_statistics_for_quantizer_setup
    retval[ip] = {rs: collector.get_statistics() for rs, collector in rs_vs_collector.items()}
  File "/usr/local/lib/python3.10/dist-packages/nncf/torch/quantization/algo.py", line 688, in <dictcomp>
    retval[ip] = {rs: collector.get_statistics() for rs, collector in rs_vs_collector.items()}
  File "/usr/local/lib/python3.10/dist-packages/nncf/common/tensor_statistics/collectors.py", line 66, in get_statistics
    raise StatisticsNotCollectedError()
nncf.common.tensor_statistics.collectors.StatisticsNotCollectedError
alexsu52 commented 1 year ago

@KodiaqQ, please take a look.

AlexanderDokuchaev commented 1 year ago

Hi @zero-nnkn. The issue in NNCF was fixed in https://github.com/openvinotoolkit/nncf/pull/2021. However, the script still fails with another issue during export to ONNX: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. This error does not depend on NNCF and can be reproduced without it.
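If it helps to confirm this independently, the export error can be reproduced with a few lines of plain PyTorch (no NNCF involved). Whether it triggers depends on the PyTorch version, since LayerNorm has no half-precision CPU kernel in the versions in use here; the model below is just a placeholder:

```python
# Standalone repro sketch of the ONNX export failure (no NNCF involved).
# On CPU, LayerNorm has no fp16 kernel in the PyTorch versions in use here,
# so tracing raises:
# RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
import torch

model = torch.nn.LayerNorm(8).half().eval()
dummy = torch.randn(1, 8, dtype=torch.float16)

torch.onnx.export(model, dummy, "layernorm_fp16.onnx")
```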

The next release of OpenVINO will add the ability to convert a PyTorch model to IR directly, without converting to ONNX. That should make it possible to use the fp16 option.
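As a rough sketch of that direct path, assuming a newer OpenVINO release where openvino.convert_model accepts a torch.nn.Module (the model below is a placeholder):

```python
# Sketch of the direct PyTorch -> OpenVINO IR conversion (no ONNX step).
# Assumes a newer OpenVINO release where openvino.convert_model accepts
# a torch.nn.Module directly; the model below is a placeholder.
import torch
import openvino as ov

model = torch.nn.Sequential(torch.nn.LayerNorm(16), torch.nn.Linear(16, 4)).eval()
example_input = torch.randn(1, 16)

ov_model = ov.convert_model(model, example_input=example_input)
ov.save_model(ov_model, "model.xml", compress_to_fp16=True)
```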

zero-nnkn commented 1 year ago

@AlexanderDokuchaev, thank you for your support. Did you make any changes to the original script?

AlexanderDokuchaev commented 1 year ago

Yes, in https://github.com/huggingface/optimum-intel/pull/401 I updated train_text_to_image_qat.py, but training will work without it; only the NNCF fix is required.