intel / neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
https://intel.github.io/neural-compressor/
Apache License 2.0

update main page #1973

Closed by chensuyue 1 month ago

chensuyue commented 1 month ago

Type of Change

feature, bug fix, documentation, validation, or other
Whether the API changed

Description

  1. Update the main page
  2. Fix FP8 unit-test (UT) coverage

Expected Behavior & Potential Risk

The expected behavior triggered by this PR

How has this PR been tested?

How to reproduce the test (including hardware information)

Dependency Change?

Any library dependency introduced or removed