hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

After LoRA fine-tuning the Qwen2-72B-Instruct-GPTQ-Int4 quantized model, the merged model cannot be exported - Cannot merge adapters to a quantized model. #5171

Closed · jym-coder closed this 1 week ago

jym-coder commented 1 month ago

Reminder

System Info

[System info provided as a screenshot; not transcribed.]

Reproduction

Traceback (most recent call last):
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/gradio/queueing.py", line 571, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/gradio/route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/gradio/blocks.py", line 1928, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/gradio/blocks.py", line 1526, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/gradio/utils.py", line 656, in async_iteration
    return await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/gradio/utils.py", line 649, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/gradio/utils.py", line 632, in run_sync_iterator_async
    return next(iterator)
           ^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/gradio/utils.py", line 815, in gen_wrapper
    response = next(iterator)
               ^^^^^^^^^^^^^^
  File "/root/autodl-tmp/LLaMA-Factory/src/llamafactory/webui/components/export.py", line 103, in save_model
    export_model(args)
  File "/root/autodl-tmp/LLaMA-Factory/src/llamafactory/train/tuner.py", line 76, in export_model
    raise ValueError("Cannot merge adapters to a quantized model.")
ValueError: Cannot merge adapters to a quantized model.
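For context: the error is raised deliberately by `export_model` in `src/llamafactory/train/tuner.py` (see the last frame above). LoRA deltas are floating-point matrices, and they cannot simply be folded into the integer-packed weights of a GPTQ-Int4 checkpoint, so the export step refuses to merge. The usual workaround is to merge the adapter into the original full-precision base model instead. Below is a minimal sketch using plain PEFT rather than LLaMA-Factory's own export path; the adapter and output paths are placeholders, not taken from this issue, and it assumes the adapter's target module names match the unquantized Qwen2-72B-Instruct.

```python
# Sketch of the common workaround: merge the LoRA adapter into the
# full-precision base model, NOT the GPTQ-Int4 checkpoint.
# All paths below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the unquantized base model (not Qwen2-72B-Instruct-GPTQ-Int4).
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-72B-Instruct",
    torch_dtype="auto",
    device_map="cpu",  # merging only rewrites weights, so CPU is sufficient
)

# Attach the trained adapter, then fold its deltas into the base weights.
model = PeftModel.from_pretrained(base, "saves/qwen2-72b/lora/sft")
merged = model.merge_and_unload()

merged.save_pretrained("output/qwen2-72b-merged")
AutoTokenizer.from_pretrained("Qwen/Qwen2-72B-Instruct").save_pretrained(
    "output/qwen2-72b-merged"
)
```

Note that an adapter trained against the GPTQ checkpoint will only merge approximately into the full-precision weights, since the two bases differ slightly; the alternative is to skip merging entirely and load the adapter on top of the quantized model at inference time.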

Expected behavior

No response

Others

No response

3215838277 commented 3 weeks ago

Hi, I ran into the same problem. Have you solved it?

yangxue-1 commented 2 weeks ago

Same question here.