hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Error when using the Hugging Face pre-training datasets provided in the README #4708

Closed xiao-liya closed 3 months ago

xiao-liya commented 3 months ago

Reminder

System Info

[2024-07-07 21:21:10,241] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)

Reproduction

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6 deepspeed --num_gpus=7 --master_port=9901 src/train.py \
    --deepspeed ds_config.json \
    --stage pt \
    --do_train True \
    --model_name_or_path /home/user/proj/Qwen2-7B-Instruct \
    --finetuning_type full \
    --template qwen \
    --flash_attn auto \
    --dataset_dir data \
    --dataset ophthalmology,pt_diagnose,wikipedia_zh,skypile \
    --cutoff_len 4096 \
    --learning_rate 5e-05 \
    --num_train_epochs 3.0 \
    --max_samples 300000 \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --max_grad_norm 1.0 \
    --logging_steps 5 \
    --save_steps 3000 \
    --warmup_steps 0 \
    --optim adamw_torch \
    --packing True \
    --report_to none \
    --output_dir saves/Custom/full/train_Qwen2-7B-instruct_book_shoucheng_add_full_pt1 \
    --fp16 True \
    --plot_loss True
```

My dataset_info.json entries are as follows:

```json
"wikipedia_zh": {
  "hf_hub_url": "pleisto/wikipedia-cn-20230720-filtered",
  "ms_hub_url": "AI-ModelScope/wikipedia-cn-20230720-filtered",
  "columns": {
    "prompt": "completion"
  }
},
"skypile": {
  "hf_hub_url": "Skywork/SkyPile-150B",
  "ms_hub_url": "AI-ModelScope/SkyPile-150B",
  "columns": {
    "prompt": "text"
  }
},
```

The error is:

```
rank0: Traceback (most recent call last):
rank0:   File "/home/user/proj/LLaMA-Factory/src/train.py", line 28, in
rank0:   File "/home/user/proj/LLaMA-Factory/src/train.py", line 19, in main
rank0:   File "/home/user/proj/LLaMA-Factory/src/llamafactory/train/tuner.py", line 48, in run_exp
rank0:     run_pt(model_args, data_args, training_args, finetuning_args, callbacks)
rank0:   File "/home/user/proj/LLaMA-Factory/src/llamafactory/train/pt/workflow.py", line 45, in run_pt
rank0:     dataset = get_dataset(model_args, data_args, training_args, stage="pt", **tokenizer_module)
rank0:   File "/home/user/proj/LLaMA-Factory/src/llamafactory/data/loader.py", line 174, in get_dataset
rank0:     all_datasets.append(load_single_dataset(dataset_attr, model_args, data_args, training_args))
rank0:   File "/home/user/proj/LLaMA-Factory/src/llamafactory/data/loader.py", line 89, in load_single_dataset
rank0:     dataset = MsDataset.load(
rank0:   File "/home/user/.conda/envs/llamafactory/lib/python3.10/site-packages/modelscope/msdatasets/ms_dataset.py", line 316, in load
rank0:     dataset_inst = remote_dataloader_manager.load_dataset(
rank0:   File "/home/user/.conda/envs/llamafactory/lib/python3.10/site-packages/modelscope/msdatasets/data_loader/data_loader_manager.py", line 132, in load_dataset
rank0:   File "/home/user/.conda/envs/llamafactory/lib/python3.10/site-packages/modelscope/msdatasets/data_loader/data_loader.py", line 82, in process
rank0:   File "/home/user/.conda/envs/llamafactory/lib/python3.10/site-packages/modelscope/msdatasets/data_loader/data_loader.py", line 109, in _build
rank0:   File "/home/user/.conda/envs/llamafactory/lib/python3.10/site-packages/modelscope/msdatasets/meta/data_meta_manager.py", line 138, in parse_dataset_structure
rank0:     target_subset_name, target_dataset_structure = get_target_dataset_structure(
rank0:   File "/home/user/.conda/envs/llamafactory/lib/python3.10/site-packages/modelscope/msdatasets/utils/dataset_utils.py", line 74, in get_target_dataset_structure
rank0:     raise ValueError(
rank0: ValueError: split train not found. Available: dict_keys([])
07/07/2024 21:19:18 - INFO - llamafactory.data.loader - Loading dataset book.txt...
07/07/2024 21:19:18 - INFO - llamafactory.data.loader - Loading dataset book.txt...
07/07/2024 21:19:18 - INFO - llamafactory.data.loader - Loading dataset book.txt...
07/07/2024 21:19:19 - INFO - llamafactory.data.loader - Loading dataset pt_dataset.txt...
07/07/2024 21:19:19 - INFO - llamafactory.data.loader - Loading dataset pt_dataset.txt...
07/07/2024 21:19:19 - INFO - llamafactory.data.loader - Loading dataset pt_dataset.txt...
07/07/2024 21:19:19 - INFO - llamafactory.data.loader - Loading dataset pt_dataset.txt...
07/07/2024 21:19:19 - INFO - llamafactory.data.loader - Loading dataset pt_dataset.txt...
07/07/2024 21:19:19 - INFO - llamafactory.data.loader - Loading dataset pt_dataset.txt...
07/07/2024 21:19:19 - INFO - llamafactory.data.loader - Loading dataset AI-ModelScope/wikipedia-cn-20230720-filtered...
07/07/2024 21:19:19 - INFO - llamafactory.data.loader - Loading dataset AI-ModelScope/wikipedia-cn-20230720-filtered...
07/07/2024 21:19:20 - INFO - llamafactory.data.loader - Loading dataset AI-ModelScope/wikipedia-cn-20230720-filtered...
07/07/2024 21:19:20 - INFO - llamafactory.data.loader - Loading dataset AI-ModelScope/wikipedia-cn-20230720-filtered...
07/07/2024 21:19:20 - INFO - llamafactory.data.loader - Loading dataset AI-ModelScope/wikipedia-cn-20230720-filtered...
07/07/2024 21:19:20 - INFO - llamafactory.data.loader - Loading dataset AI-ModelScope/wikipedia-cn-20230720-filtered...
[2024-07-07 21:19:21,063] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2937496
[2024-07-07 21:19:21,063] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2937497
[2024-07-07 21:19:21,495] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2937498
[2024-07-07 21:19:21,960] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2937499
[2024-07-07 21:19:22,512] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2937500
[2024-07-07 21:19:23,017] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2937501
[2024-07-07 21:19:23,559] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2937502
[2024-07-07 21:19:24,010] [ERROR] [launch.py:322:sigkill_handler] ['/home/user/.conda/envs/llamafactory/bin/python', '-u', 'src/train.py', '--local_rank=6', '--deepspeed', 'ds_config.json', '--stage', 'pt', '--do_train', 'True', '--model_name_or_path', '/home/user/proj/Qwen2-7B-Instruct', '--finetuning_type', 'full', '--template', 'qwen', '--flash_attn', 'auto', '--dataset_dir', 'data', '--dataset', 'ophthalmology,pt_diagnose,wikipedia_zh,skypile', '--cutoff_len', '4096', '--learning_rate', '5e-05', '--num_train_epochs', '3.0', '--max_samples', '300000', '--per_device_train_batch_size', '2', '--gradient_accumulation_steps', '4', '--lr_scheduler_type', 'cosine', '--max_grad_norm', '1.0', '--logging_steps', '5', '--save_steps', '3000', '--warmup_steps', '0', '--optim', 'adamw_torch', '--packing', 'True', '--report_to', 'none', '--output_dir', 'saves/Custom/full/train_Qwen2-7B-instruct_book_shoucheng_add_full_pt1', '--fp16', 'True', '--plot_loss', 'True'] exits with return code = 1
```
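The ValueError originates in ModelScope's `dataset_utils.get_target_dataset_structure`, which raises when the hub returns an empty split mapping for the requested dataset. A minimal sketch of that check (a hypothetical simplification for illustration, not ModelScope's actual implementation):

```python
def get_target_dataset_structure(dataset_structure: dict, split: str = "train"):
    """Hypothetical simplification of ModelScope's split lookup:
    raise if the requested split is absent from the dataset metadata."""
    if split not in dataset_structure:
        raise ValueError(
            f"split {split} not found. Available: {dataset_structure.keys()}"
        )
    return dataset_structure[split]

# An empty split mapping (what the ModelScope hub returned here)
# reproduces the error message seen in the log above:
try:
    get_target_dataset_structure({})
except ValueError as exc:
    print(exc)  # split train not found. Available: dict_keys([])
```

In other words, the failure happens before any data is downloaded: the ModelScope metadata for `AI-ModelScope/SkyPile-150B` does not advertise a `train` split, so the loader gives up.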

What could be the cause of this?

Expected behavior

Continued (incremental) pre-training proceeds normally.

Others

No response

hiyouga commented 3 months ago

Use the Hugging Face datasets instead of ModelScope.
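According to the project README, LLaMA-Factory switches to the `ms_hub_url` entries when the `USE_MODELSCOPE_HUB` environment variable is enabled (verify against your installed version). To force the `hf_hub_url` entries to be used, make sure that variable is unset or zero before launching training:

```shell
# Disable the ModelScope hub so the hf_hub_url entries in
# dataset_info.json are used instead of ms_hub_url.
export USE_MODELSCOPE_HUB=0
echo "USE_MODELSCOPE_HUB=${USE_MODELSCOPE_HUB}"
```

If the variable was exported earlier in the shell session (or in a startup file), that would explain why the loader called `MsDataset.load` despite a valid `hf_hub_url` being present.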

xiao-liya commented 3 months ago

Sorry, I don't quite follow. The datasets I'm using are exactly the ones defined in dataset_info, and I checked Hugging Face — both datasets exist there. The definitions in dataset_info are as follows:

```json
"wikipedia_zh": {
  "hf_hub_url": "pleisto/wikipedia-cn-20230720-filtered",
  "ms_hub_url": "AI-ModelScope/wikipedia-cn-20230720-filtered",
  "columns": {
    "prompt": "completion"
  }
},
"skypile": {
  "hf_hub_url": "Skywork/SkyPile-150B",
  "ms_hub_url": "AI-ModelScope/SkyPile-150B",
  "columns": {
    "prompt": "text"
  }
},
```
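For what it's worth, the column mapping in those entries is internally consistent. A quick self-contained sanity check (a hypothetical helper, not part of LLaMA-Factory) that the fragment parses as JSON and that every entry names a Hugging Face repo and maps `prompt` to a source column:

```python
import json

# The two entries from dataset_info.json, wrapped in braces to form a
# complete JSON document (trailing comma removed).
fragment = """
{
  "wikipedia_zh": {
    "hf_hub_url": "pleisto/wikipedia-cn-20230720-filtered",
    "ms_hub_url": "AI-ModelScope/wikipedia-cn-20230720-filtered",
    "columns": {"prompt": "completion"}
  },
  "skypile": {
    "hf_hub_url": "Skywork/SkyPile-150B",
    "ms_hub_url": "AI-ModelScope/SkyPile-150B",
    "columns": {"prompt": "text"}
  }
}
"""

info = json.loads(fragment)
for name, attrs in info.items():
    # Every entry should point at a HF repo and map "prompt" to a column.
    assert "hf_hub_url" in attrs and "prompt" in attrs["columns"]
    print(name, "->", attrs["columns"]["prompt"])
```

So the configuration itself looks fine; the failure is in which hub the loader chose (`MsDataset.load` in the traceback), not in the column mapping.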