yangjianxin1 / Firefly

Firefly: a training toolkit for large language models, supporting training of Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models

An error during QLoRA training #16

Open nieallen opened 1 year ago

nieallen commented 1 year ago

ValueError: FP16 Mixed precision training with AMP or APEX (--fp16) and FP16 half precision evaluation (--fp16_full_eval) can only be used on CUDA devices. How can this error be resolved?
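
For context, the --fp16 flag corresponds to the fp16 option of Hugging Face TrainingArguments, which requires a visible CUDA device. A minimal sketch (the output path is a placeholder) that only enables fp16 when CUDA is actually available:

```python
import torch
from transformers import TrainingArguments

# Enable fp16 mixed precision only when a CUDA device is visible;
# on CPU-only machines this avoids the ValueError above.
training_args = TrainingArguments(
    output_dir="output/firefly-qlora",  # placeholder path
    fp16=torch.cuda.is_available(),
)
```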

nieallen commented 1 year ago

Solved it by removing the fp16 parameter, but now there is another error: ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details
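
For reference, the documentation linked in the error describes keeping the offloaded modules in 32-bit while quantizing the rest, via a custom device_map. A minimal sketch of that approach (the model name and module-to-device mapping are placeholders, not the Firefly defaults):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize to 8-bit, but keep modules offloaded to CPU/disk in fp32,
# as the error message suggests.
quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

# Custom device_map: keep lm_head on the CPU, everything else on GPU 0.
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",  # placeholder model
    device_map=device_map,
    quantization_config=quant_config,
)
```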

createmomo commented 1 year ago

Try restarting the machine and running it again? Some GPU memory may have been allocated while debugging the previous error and never released.


yangjianxin1 commented 1 year ago

Remove device_map="auto" on line 107 and try again @nieallen
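
For illustration, a hypothetical sketch of what a QLoRA-style loading call looks like with device_map="auto" removed (the model path and other arguments are assumptions, not the exact contents of line 107):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/baichuan-7B",               # placeholder model path
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,                    # QLoRA loads the base model in 4-bit
        bnb_4bit_compute_dtype=torch.float16,
    ),
    trust_remote_code=True,
    # device_map="auto",                      # removed, per the suggestion above
)
```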

nieallen commented 1 year ago

Try restarting the machine and running it again? Some GPU memory may have been allocated while debugging the previous error and never released.

Hi, I can't restart the machine; it's a cloud server and I don't have the permission.

nieallen commented 1 year ago

Remove device_map="auto" on line 107 and try again @nieallen

Tried it, and I still get the same error.

yangjianxin1 commented 1 year ago

ValueError: FP16 Mixed precision training with AMP or APEX (--fp16) and FP16 half precision evaluation (--fp16_full_eval) can only be used on CUDA devices. How can this error be resolved?

This looks like an apex error; I think I've run into it before. You can try uninstalling apex: pip uninstall apex.

yangjianxin1 commented 1 year ago

Solved it by removing the fp16 parameter, but now there is another error: ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details

This error looks familiar; I also hit it while debugging. I don't remember whether it was a version issue, but the following libraries need to be installed from source. Give it a try:

pip install git+https://github.com/huggingface/peft.git
pip install git+https://github.com/huggingface/accelerate.git
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/TimDettmers/bitsandbytes.git
aiXia121 commented 7 months ago

Has anyone solved this?

ttyuuuuuuuuuu commented 5 months ago

Commented out the fp16: true line, and that solved it!

SanJat-Lau commented 5 months ago

Commented out device_map="auto".