InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

feat: auto set awq model_format from hf #1799

Closed — zhyncs closed this 1 week ago

zhyncs commented 1 week ago

Motivation

As titled.

When the HF model is already in AWQ format, there should be no need to explicitly specify `--model-format awq`; this PR detects the format automatically from the model's config.

# tested with https://huggingface.co/01-ai/Yi-6B-Chat-4bits
python3 -m lmdeploy serve api_server /workdir/Yi-6B-Chat-4bits

@lvhan028 @AllentDan Could you help review this PR? Thanks.

Modification

The modification is simple. I didn't extract it into a separate function, but if needed, I am more than happy to refactor it.
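As a rough illustration of the idea (not the actual diff), auto-detection can be done by reading the model's `config.json` and checking its `quantization_config`; the helper name `infer_model_format` and the exact key lookups here are assumptions for the sketch:

```python
import json
import os


def infer_model_format(model_path, model_format=None):
    """Infer model_format from the HF config when not set explicitly.

    Hypothetical helper sketching the idea behind this PR: if the model's
    config.json carries a quantization_config whose quant_method is "awq",
    default model_format to "awq" so users need not pass --model-format.
    """
    # An explicit user choice always wins over auto-detection.
    if model_format is not None:
        return model_format
    config_file = os.path.join(model_path, "config.json")
    if os.path.exists(config_file):
        with open(config_file) as f:
            config = json.load(f)
        quant_config = config.get("quantization_config") or {}
        if quant_config.get("quant_method") == "awq":
            return "awq"
    return model_format
```

With something like this in place, `python3 -m lmdeploy serve api_server /workdir/Yi-6B-Chat-4bits` would pick up the AWQ format on its own, while `--model-format` still overrides the detected value.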

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repositories? If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  3. If the modification has a dependency on downstream projects of a newer version, this PR should be tested with all supported versions of downstream projects.
  4. The documentation has been modified accordingly, like docstring or example tutorials.
zhyncs commented 1 week ago

The pr_ete_test failure is an ImportError unrelated to this change; it can be ignored:

ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run pip install flash_attn