Open myBigbug opened 3 weeks ago
Another question: can you (I mean the authors) share the quantization scripts? We need the script for after SFT-ing this model.
I'd also like to know how the quantization is done. Have you found an answer?
No, I'm still waiting
Yes, it works.
In finetune_lora.sh, change MODEL="openbmb/MiniCPM-Llama3-V-2_5-int4"
and set --tune_vision false --deepspeed ds_config_zero3.json
That's all.
@nickyisadog I fine-tuned the int4 model with the finetune_ds.sh script (not the LoRA script) and got this error: ValueError: You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft for more details. Could you help me figure out what causes this?
@nickyisadog
I am facing this error, RuntimeError: Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
I ran with these changes:
In finetune_lora.sh, changed MODEL="openbmb/MiniCPM-Llama3-V-2_5-int4"
--tune_vision false --deepspeed ds_config_zero3.json
Is there an existing issue / discussion for this?
Is there an existing answer for this in FAQ?
Current Behavior
MiniCPM-Llama3-V 2.5 supports fine-tuning, but my GPU only has 24 GB of memory, which isn't enough. Does MiniCPM-Llama3-V 2.5 int4 support fine-tuning? Currently, fine-tuning gives me this error: ValueError: You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft for more details
期望行为 | Expected Behavior
No response
复现方法 | Steps To Reproduce
No response
Environment
Anything else?
No response