wejoncy / QLLM

A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ support and easy export to ONNX/ONNX Runtime.
Apache License 2.0

Fix typos #115

Closed emphasis10 closed 5 months ago

emphasis10 commented 5 months ago

Hello! Thanks for maintaining this nice repo. I expected the model weight file to be a .bin file, but it is actually .safetensors. I think this can be fixed with this PR. Thank you!
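The mismatch the PR fixes is documentation saying `.bin` while the checkpoints on disk are `.safetensors`. As a minimal illustration (the helper name `weight_format` is hypothetical, not part of QLLM), one can detect which format a checkpoint directory actually uses:

```python
from pathlib import Path
import tempfile

def weight_format(checkpoint_dir):
    """Report which weight-file format a checkpoint directory uses.

    Hypothetical helper for illustration only: Hugging Face-style
    checkpoints store weights either as *.safetensors or as *.bin shards.
    """
    p = Path(checkpoint_dir)
    if list(p.glob("*.safetensors")):
        return "safetensors"
    if list(p.glob("*.bin")):
        return "bin"
    return "unknown"

# Quick demonstration with a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "model.safetensors").touch()
    print(weight_format(d))  # -> safetensors
```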

wejoncy commented 5 months ago

Thanks. Really appreciate your contribution.

Could you please also help to fix the typo OptForCausalLM to OPTForCausalLM at https://github.com/wejoncy/QLLM/blob/47869d3371e1e3f367f97bbc7699836a55994791/qllm/quantization/sequential_layes_awq_config.py#L571C5-L571C20

emphasis10 commented 5 months ago

> Thanks. Really appreciate your contribution.
>
> Could you please also help to fix the typo OptForCausalLM to OPTForCausalLM at https://github.com/wejoncy/QLLM/blob/47869d3371e1e3f367f97bbc7699836a55994791/qllm/quantization/sequential_layes_awq_config.py#L571C5-L571C20

Done!

wejoncy commented 5 months ago

Thank you.