intel / neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
https://intel.github.io/neural-compressor/
Apache License 2.0

Add Docstring for TF 3x API and Torch 3x Mixed Precision #1944

Closed: zehao-intel closed this pull request 2 months ago

zehao-intel commented 2 months ago

Type of Change

documentation

Description

  1. Add docstrings for the TF 3x API.
  2. Add docstrings for Torch 3x mixed precision (see the docstring sketch after this list).
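For context, the additions are Google-style docstrings of the kind the repo's pydocstyle check accepts. A minimal sketch, assuming a hypothetical helper in the 3x mixed-precision code (the module, function name, and behavior are illustrative, not taken from this PR):

```python
"""Hypothetical module docstring: utilities for 3x mixed-precision conversion."""


def convert_precision(model, dtype="bf16"):
    """Convert eligible layers of a model to the requested low-precision dtype.

    Args:
        model: The framework model object to convert.
        dtype: Target precision, e.g. "bf16" or "fp16". Defaults to "bf16".

    Returns:
        The model object with converted layers.
    """
    # Placeholder body; the real conversion logic lives in the 3x API modules.
    return model
```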

How has this PR been tested?

PreCI

Dependency Change?

No

zehao-intel commented 2 months ago

Please update the scan path.

https://github.com/intel/neural-compressor/blob/master/.azure-pipelines/scripts/codeScan/pydocstyle/scan_path.txt

Added
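For readers unfamiliar with the check: scan_path.txt lists the source directories that the Azure pipeline's pydocstyle job scans for missing or malformed docstrings, so newly documented modules need an entry there. A hedged sketch of reproducing that check locally before updating the file, using pydocstyle's check() generator; the directory path below is an assumption for illustration, not copied from the PR:

```python
from pathlib import Path

from pydocstyle import check  # pydocstyle's public checker generator

# Hypothetical directory about to be added to scan_path.txt; replace with the
# actual paths covered by this PR.
target_dir = Path("neural_compressor/tensorflow")

# Gather every Python file under the directory and print docstring violations,
# roughly mirroring what the Pre-CI pydocstyle job does for listed scan paths.
files = [str(p) for p in target_dir.rglob("*.py")]
for error in check(files):
    print(error)
```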