modelscope / ms-swift

Use PEFT or Full-parameter to finetune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
https://swift.readthedocs.io/zh-cn/latest/Instruction/index.html
Apache License 2.0

Using qlora with Qwen-7b-chat and a local dataset, I get the error below. Training the same setup with lora worked fine. #127

Closed Jethu1 closed 11 months ago

Jethu1 commented 1 year ago

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/train_file//opensource/swift-main/examples/pytorch/llm/llm_sft.py", line 2, in <module>
    import custom
  File "/train_file/xx/opensource/swift-main/examples/pytorch/llm/custom.py", line 8, in <module>
    from swift.llm import (ConversationsPreprocessor, QueryPreprocessor, LoRATM, Template, TemplateType,
  File "/train_file/xxx/opensource/swift-main/swift/llm/__init__.py", line 2, in <module>
    from .infer import llm_infer
  File "/train_file/xx/opensource/swift-main/swift/llm/infer.py", line 6, in <module>
    from modelscope import BitsAndBytesConfig, GenerationConfig
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "/opt/conda/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 422, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/opt/conda/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 441, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import modelscope.utils.hf_util because of the following error (look up to see its traceback):
Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):
Failed to import transformers.generation.utils because of the following error (look up to see its traceback):

    CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
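The checks the error message asks for (run `python -m bitsandbytes`, then locate the CUDA libraries and expose them via `LD_LIBRARY_PATH`) can be sketched as a small shell helper. The helper name and search paths below are my own, not part of bitsandbytes:

```shell
# check_libcudart DIR... : print any CUDA runtime library (libcudart) found
# under the given directories; fail if none is found. bitsandbytes needs to
# be able to dlopen libcudart, so this is the first thing to verify.
check_libcudart() {
    found=$(find "$@" -name 'libcudart*' 2>/dev/null)
    if [ -n "$found" ]; then
        printf '%s\n' "$found"
    else
        echo "no libcudart found; add its directory to LD_LIBRARY_PATH" >&2
        return 1
    fi
}

# Typical use on the failing machine (paths are illustrative):
#   python -m bitsandbytes                        # bitsandbytes' own report
#   check_libcudart /usr/local/cuda* /opt/conda   # locate the CUDA runtime
#   export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH
#   python -m bitsandbytes                        # re-check after exporting
```

If `check_libcudart` finds nothing at all, the container simply has no CUDA runtime installed, and no `LD_LIBRARY_PATH` setting will fix bitsandbytes.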
Jintao-Huang commented 1 year ago

It looks like your machine doesn't support bitsandbytes. Try qwen-7b-chat-int4 instead, which uses auto_gptq quantization.
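The suggested switch amounts to changing the model type in the training invocation, since the GPTQ int4 checkpoint is loaded via auto_gptq and avoids bitsandbytes entirely. A hedged sketch, assuming the `llm_sft.py` entry point from the examples/pytorch/llm scripts used in this thread; the dataset argument and other flags are placeholders to adapt to your local script:

```shell
# Run from examples/pytorch/llm; keep your existing lora flags and local
# dataset arguments, and only swap the model type to the GPTQ int4 variant.
CUDA_VISIBLE_DEVICES=0 python llm_sft.py \
    --model_type qwen-7b-chat-int4 \
    --sft_type lora \
    --output_dir output
```

Note this pairs plain LoRA with pre-quantized GPTQ weights, so the memory savings of qlora are still obtained without bitsandbytes ever being imported.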