modelscope / ms-swift

Use PEFT or Full-parameter to finetune 300+ LLMs or 80+ MLLMs. (Qwen2, GLM4v, Internlm2.5, Yi, Llama3.1, Llava-Video, Internvl2, MiniCPM-V-2.6, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
https://swift.readthedocs.io/zh-cn/latest/Instruction/index.html
Apache License 2.0
3.38k stars 283 forks

LoRA fine-tuning of meta-llama3-8b-chat raises an error #781

Closed catundchat closed 4 months ago

catundchat commented 4 months ago

Describe the bug

I ran the fine-tuning command:

$env:PYTHONPATH="C:\Users\Administrator\AppData\Local\Programs\Python\Python310\python.exe"
$env:CUDA_VISIBLE_DEVICES="0"
python llm_sft.py `
    --model_type qwen1half-moe-a2_7b-chat `
    --model_id_or_path "D:\models\Qwen1.5-MoE-A2.7B-Chat" `
    --sft_type lora `
    --tuner_backend peft `
    --dtype AUTO `
    --output_dir "D:\github\swift\output" `
    --train_dataset_sample -1 `
    --num_train_epochs 3 `
    --max_length 1024 `
    --check_dataset_strategy warning `
    --use_loss_scale true `
    --lora_rank 8 `
    --lora_alpha 32 `
    --lora_dropout_p 0.05 `
    --lora_target_modules ALL `
    --gradient_checkpointing true `
    --batch_size 1 `
    --weight_decay 0.1 `
    --learning_rate 5e-5 `
    --gradient_accumulation_steps 16 `
    --max_grad_norm 1.0 `
    --warmup_ratio 0.03 `
    --eval_steps 50 `
    --save_steps 50 `
    --save_total_limit 3 `
    --logging_steps 10 `
    --use_flash_attn false `
    --self_cognition_sample 1000 `
    --model_name 风语诗人 'FengPoet' `
    --model_author 吴文俊 'Wendy' `
    --custom_train_dataset_path "D:\dataset\poems_processed\rhyme\train_d1.jsonl" `
    --custom_val_dataset_path "D:\dataset\poems_processed\rhyme\val_d1.jsonl"

The error was:

Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\cli\sft.py", line 5, in <module>
    sft_main()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\utils\run_utils.py", line 25, in x_main
    args, remaining_argv = parse_args(args_class, argv)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\utils\utils.py", line 98, in parse_args
    args, remaining_args = parser.parse_args_into_dataclasses(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\hf_argparser.py", line 338, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 135, in __init__
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\llm\utils\argument.py", line 296, in __post_init__
    set_model_type(self)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\llm\utils\argument.py", line 1029, in set_model_type
    raise ValueError(
ValueError: model_id_or_path: 'D:\models\Meta-Llama-3-8B-Instruct' is not registered. Please set `--model_type <model_type>` additionally.

Your hardware and system info

CUDA 12.4, Windows 10, RTX A5000, torch 2.0.1+cu118

Jintao-Huang commented 4 months ago

--model_type llama3-8b-instruct --model_id_or_path 'D:\models\Meta-Llama-3-8B-Instruct'
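Applied to the reporter's PowerShell invocation, the fix amounts to swapping in these two flags (a sketch only; all remaining flags from the original command are unchanged and elided here):

```shell
# PowerShell; only the two model flags differ from the original command
python llm_sft.py `
    --model_type llama3-8b-instruct `
    --model_id_or_path "D:\models\Meta-Llama-3-8B-Instruct" `
    --sft_type lora
```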

catundchat commented 4 months ago

I pulled the code again and reinstalled. After running the command with --model_type llama3-8b-instruct added, I got this error:

Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\cli\sft.py", line 5, in <module>
    sft_main()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\utils\run_utils.py", line 25, in x_main
    args, remaining_argv = parse_args(args_class, argv)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\utils\utils.py", line 98, in parse_args
    args, remaining_args = parser.parse_args_into_dataclasses(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\hf_argparser.py", line 338, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 135, in __init__
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\llm\utils\argument.py", line 296, in __post_init__
    set_model_type(self)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\swift\llm\utils\argument.py", line 1046, in set_model_type
    raise ValueError(f"model_type: '{args.model_type}' is not registered. "
ValueError: model_type: 'llama3-8b-instruct' is not registered. The model_type you can choose: ['c4ai-command-r-plus', 'c4ai-command-r-v01', 'mengzi3-13b-base', 'baichuan-7b', 'baichuan-13b-chat', 'xverse-moe-a4_2b', 'xverse-7b', 'xverse-7b-chat', 'xverse-13b-256k', 'xverse-65b-chat', 'xverse-65b-v2', 'xverse-65b', 'xverse-13b', 'xverse-13b-chat', 'seqgpt-560m', 'bluelm-7b', 'bluelm-7b-32k', 'bluelm-7b-chat', 'bluelm-7b-chat-32k', 'internlm-7b', 'internlm-20b', 'grok-1', 'mamba-2.8b', 'mamba-1.4b', 'mamba-790m', 'mamba-390m', 'mamba-370m', 'mamba-130m', 'cogagent-18b-instruct', 'cogagent-18b-chat', 'cogvlm-17b-instruct', 'internlm-7b-chat', 'internlm-7b-chat-8k', 'internlm-20b-chat', 'baichuan-13b', 'baichuan2-13b', 'baichuan2-13b-chat', 'baichuan2-7b', 'baichuan2-7b-chat', 'baichuan2-7b-chat-int4', 'baichuan2-13b-chat-int4', 'codegeex2-6b', 'chatglm2-6b', 'chatglm2-6b-32k', 'chatglm3-6b-base', 'chatglm3-6b', 'chatglm3-6b-32k', 'codefuse-codegeex2-6b-chat', 'dbrx-instruct', 'dbrx-base', 'mixtral-moe-8x22b-v1', 'mixtral-moe-7b-instruct', 'mixtral-moe-7b', 'mistral-7b-v2', 'mistral-7b', 'mistral-7b-instruct-v2', 'mistral-7b-instruct', 'openbuddy-llama2-13b-chat', 'openbuddy-llama-65b-chat', 'openbuddy-llama2-70b-chat', 'openbuddy-mistral-7b-chat', 'openbuddy-mixtral-moe-7b-chat', 'ziya2-13b', 'ziya2-13b-chat', 'yi-6b', 'yi-9b-200k', 'yi-9b', 'yi-6b-200k', 'yi-34b', 'yi-34b-200k', 'yi-34b-chat', 'yi-6b-chat', 'zephyr-7b-beta-chat', 'openbuddy-zephyr-7b-chat', 'sus-34b-chat', 'deepseek-7b', 'deepseek-7b-chat', 'deepseek-67b', 'deepseek-67b-chat', 'openbuddy-deepseek-67b-chat', 'deepseek-coder-33b-instruct', 'deepseek-coder-6_7b-instruct', 'deepseek-coder-1_3b-instruct', 'deepseek-coder-33b', 'deepseek-coder-6_7b', 'deepseek-coder-1_3b', 'qwen1half-moe-a2_7b', 'codeqwen1half-7b', 'qwen1half-72b', 'qwen1half-32b', 'qwen1half-14b', 'qwen1half-7b', 'qwen1half-4b', 'qwen1half-1_8b', 'qwen1half-0_5b', 'deepseek-math-7b', 'deepseek-math-7b-chat', 'deepseek-math-7b-instruct', 
'gemma-7b-instruct', 'gemma-2b-instruct', 'gemma-7b', 'gemma-2b', 'mixtral-moe-7b-aqlm-2bit-1x16', 'llama2-7b-aqlm-2bit-1x16', 'codeqwen1half-7b-chat', 'qwen1half-moe-a2_7b-chat', 'qwen1half-72b-chat', 'qwen1half-32b-chat', 'qwen1half-14b-chat', 'qwen1half-7b-chat', 'qwen1half-4b-chat', 'qwen1half-1_8b-chat', 'qwen1half-0_5b-chat', 'codeqwen1half-7b-chat-awq', 'qwen1half-72b-chat-awq', 'qwen1half-32b-chat-awq', 'qwen1half-14b-chat-awq', 'qwen1half-7b-chat-awq', 'qwen1half-4b-chat-awq', 'qwen1half-1_8b-chat-awq', 'qwen1half-0_5b-chat-awq', 'qwen1half-moe-a2_7b-chat-int4', 'qwen1half-72b-chat-int8', 'qwen1half-72b-chat-int4', 'qwen1half-32b-chat-int4', 'qwen1half-14b-chat-int8', 'qwen1half-14b-chat-int4', 'qwen1half-7b-chat-int8', 'qwen1half-7b-chat-int4', 'qwen1half-4b-chat-int8', 'qwen1half-4b-chat-int4', 'qwen1half-1_8b-chat-int8', 'qwen1half-1_8b-chat-int4', 'qwen1half-0_5b-chat-int8', 'qwen1half-0_5b-chat-int4', 'internlm2-20b-base', 'internlm2-20b', 'internlm2-7b-base', 'internlm2-7b', 'internlm2-20b-chat', 'internlm2-20b-sft-chat', 'internlm2-7b-chat', 'internlm2-7b-sft-chat', 'internlm2-math-20b-chat', 'internlm2-math-7b-chat', 'internlm2-math-20b', 'internlm2-math-7b', 'internlm2-1_8b-chat', 'internlm2-1_8b-sft-chat', 'internlm2-1_8b', 'internlm-xcomposer2-7b-chat', 'deepseek-vl-1_3b-chat', 'deepseek-vl-7b-chat', 'llama2-70b-chat', 'llama2-13b-chat', 'llama2-7b-chat', 'llama2-70b', 'llama2-13b', 'llama2-7b', 'polylm-13b', 'qwen-7b', 'qwen-14b', 'tongyi-finance-14b', 'qwen-72b', 'qwen-1_8b', 'codefuse-qwen-14b-chat', 'qwen-7b-chat', 'qwen-14b-chat', 'tongyi-finance-14b-chat', 'qwen-72b-chat', 'qwen-1_8b-chat', 'qwen-vl', 'qwen-vl-chat', 'qwen-audio', 'qwen-audio-chat', 'qwen-7b-chat-int4', 'qwen-14b-chat-int4', 'qwen-7b-chat-int8', 'qwen-14b-chat-int8', 'qwen-vl-chat-int4', 'tongyi-finance-14b-chat-int4', 'qwen-72b-chat-int4', 'qwen-72b-chat-int8', 'qwen-1_8b-chat-int4', 'qwen-1_8b-chat-int8', 'skywork-13b', 'skywork-13b-chat', 'codefuse-codellama-34b-chat', 
'telechat-12b', 'phi2-3b', 'telechat-7b', 'minicpm-moe-8x2b', 'deepseek-moe-16b', 'deepseek-moe-16b-chat', 'yuan2-2b-janus-instruct', 'yuan2-102b-instruct', 'yuan2-51b-instruct', 'yuan2-2b-instruct', 'orion-14b-chat', 'orion-14b', 'yi-vl-6b-chat', 'yi-vl-34b-chat', 'minicpm-2b-128k', 'minicpm-1b-sft-chat', 'minicpm-2b-chat', 'minicpm-2b-sft-chat', 'minicpm-v-v2', 'minicpm-v-3b-chat', 'llava1d6-mistral-7b-instruct', 'llava1d6-yi-34b-instruct', 'mplug-owl2d1-chat', 'mplug-owl2-chat']

Jintao-Huang commented 4 months ago

Install it with pip install -e .

catundchat commented 4 months ago

After installing with pip install -e ., the llama3 fine-tuning script runs. Why didn't pip install 'ms-swift[all]' -U work?

catundchat commented 4 months ago

pip install -e .: this command looks for a setup.py in the current directory (here, the swift folder) and installs the package in editable mode. Instead of copying the source into Python's site-packages directory, it creates a link pointing at the project source, so any change to the source code takes effect immediately without reinstalling. This is very convenient for development and debugging.

pip install 'ms-swift[all]' -U: this command downloads and installs the latest released version of the ms-swift package from the Python Package Index (PyPI), including all optional dependencies. The -U flag upgrades any previously installed version. It is meant for installing third-party packages, not a local project you are developing.

If you run pip install 'ms-swift[all]' -U inside the swift folder, and ms-swift is the very project you are developing, pip will still install ms-swift from PyPI rather than use your local source. That is usually not what you want, since you presumably want to test and run the latest version of the local code.
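As a minimal sketch of how to verify which copy of a package Python actually imports (the function names below are illustrative, not part of swift), you can resolve a module's import origin and check whether it lives under site-packages (a PyPI install) or elsewhere (e.g. an editable source checkout):

```python
import importlib.util
import sysconfig
from pathlib import Path

def module_origin(name: str):
    """Return the file path a module would be imported from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec and spec.origin else None

def is_site_packages_install(name: str) -> bool:
    """True if the module resolves inside site-packages (typically a PyPI
    install); False if it resolves elsewhere (stdlib, editable checkout, ...)."""
    origin = module_origin(name)
    if origin is None:
        return False
    site = Path(sysconfig.get_paths()["purelib"]).resolve()
    return site in Path(origin).resolve().parents

# Illustrated with a stdlib module, which is never in site-packages:
print(module_origin("json"))
print(is_site_packages_install("json"))  # False
```

Running this against "swift" after pip install -e . from the cloned repo should point at the local checkout rather than site-packages, confirming which code the fine-tuning command actually executes.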