hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Error when loading an agent dataset #2571

Closed. PolarPeak closed this issue 8 months ago.

PolarPeak commented 8 months ago

Reminder

Reproduction

My machine has no internet access, so I downloaded the dataset and converted it to JSON format:

{ "conversations": { "0": [ { "from": "human", "loss": null, "value": "Interact with a household to solve a task. Imagine you are an intelligent agent in a household environment and your target is to perform actions to complete the task goal. At the beginning of your interactions, you will be given the detailed description of the current environment and your goal to accomplish. For each of your turn, you will be given a list of actions which you can choose one to perform in this turn. You should choose from two actions: \"THOUGHT\" or \"ACTION\". If you choose \"THOUGHT\", you should first think about the current condition and plan for your future actions, and then output your action in this turn. Your output must strictly follow this format:\"THOUGHT: your thoughts.\n ACTION: your next action\n\"; If you choose \"ACTION\", you should directly output the action in this turn. Your output must strictly follow this format:\"ACTION: your next action\n\". After your each turn, the environment will give you immediate feedback based on which you plan your next few steps. if the envrionment output \"Nothing happened\", that means the previous action is invalid and you should try more options.\n Reminder: \n1. the action must be chosen from the given available actions. Any actions except provided available actions will be regarded as illegal. \n2. Think when necessary, try to act directly more in the process." }, { "from": "gpt", "loss": false, "value": "OK. I'll follow your instructions and try my best to solve the task." }, { "from": "human", "loss": null, "value": "Here is your task. You are in the middle of a room. Looking quickly around you, you see a armchair 2, a armchair 1, a bed 1, a countertop 1, a diningtable 1, a drawer 2, a drawer 1, a dresser 1, a garbagecan 1, a laundryhamper 1, and a sidetable 1.\nYour task is to: find two laptop and put them in bed." }, { "from": "gpt", "loss": true, "value": "THOUGHT: I need to find two laptops and put them in bed. I'll begin by searching the areas that are most likely to have laptops.\n ACTION: go to diningtable 1" }, { "from": "human", "loss": null, "value": "On the diningtable 1, you see a alarmclock 2, a bowl 2, a cd 2, a creditcard 2, a creditcard 1, a laptop 1, a mug 2, a pencil 1, and a remotecontrol 2." }, { "from": "gpt", "loss": true, "value": "ACTION: take laptop 1 from diningtable 1" }, { "from": "human", "loss": null, "value": "You pick up the laptop 1 from the diningtable 1." }, { "from": "gpt", "loss": true, "value": "ACTION: go to bed 1" }, { "from": "human", "loss": null, "value": "On the bed 1, you see a pillow 2, and a pillow 1." }, { "from": "gpt", "loss": true, "value": "ACTION: put laptop 1 in\/on bed 1" }, { "from": "human", "loss": null, "value": "You put the laptop 1 in\/on the bed 1." }, { "from": "gpt", "loss": true, "value": "ACTION: go to countertop 1" }, { "from": "human", "loss": null, "value": "On the countertop 1, you see a alarmclock 3, a bowl 3, a cellphone 3, a creditcard 3, and a mirror 1." }, { "from": "gpt", "loss": true, "value": "ACTION: go to drawer 1" }, { "from": "human", "loss": null, "value": "The drawer 1 is closed." }, { "from": "gpt", "loss": true, "value": "ACTION: open drawer 1" }, { "from": "human", "loss": null, "value": "You open the drawer 1. The drawer 1 is open. In it, you see a pen 2." }, { "from": "gpt", "loss": true, "value": "ACTION: go to drawer 2" }, { "from": "human", "loss": null, "value": "The drawer 2 is closed." 
}, { "from": "gpt", "loss": true, "value": "ACTION: open drawer 2" }, { "from": "human", "loss": null, "value": "You open the drawer 2. The drawer 2 is open. In it, you see a book 1, and a keychain 2." }, { "from": "gpt", "loss": true, "value": "ACTION: go to dresser 1" }, { "from": "human", "loss": null, "value": "On the dresser 1, you see a bowl 1, a cd 1, a cellphone 1, a keychain 1, a pen 1, a remotecontrol 1, and a television 1." }, { "from": "gpt", "loss": true, "value": "ACTION: go to armchair 1" }, { "from": "human", "loss": null, "value": "On the armchair 1, you see a laptop 2." }, { "from": "gpt", "loss": true, "value": "ACTION: take laptop 2 from armchair 1" }, { "from": "human", "loss": null, "value": "You pick up the laptop 2 from the armchair 1." }, { "from": "gpt", "loss": true, "value": "ACTION: go to bed 1" }, { "from": "human", "loss": null, "value": "On the bed 1, you see a laptop 1, a pillow 2, and a pillow 1." }, { "from": "gpt", "loss": true, "value": "ACTION: put laptop 2 in\/on bed 1" } ], "id": { "0": "alfworld_0", "1": "alfworld_1", } }

My dataset_info.json entry is:

"agent_instruct": {
  "file_name": "agentsft.json",
  "file_sha1": "faf556ae4ff6be7d3b808b4c15910bc438d155a2",
  "columns": {
    "prompt": "conversation",
    "query": "role",
    "response": "content"
  },
  "formatting": "sharegpt"
},

The training command is:

deepspeed --num_gpus 1 --master_port=9901 src/train_bash.py \
    --deepspeed zero2.json \
    --stage sft \
    --dataset agent_instruct \
    --model_name_or_path /workspace/Mistral-7B-Instruct-v0.2 \
    --adapter_name_or_path /workspace/Mistral-7B-Instruct-sft-checkpoint \
    --do_train True \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir /workspace/Mistral-7B-Instruct-agent-sft \
    --overwrite_cache \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 2 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 2000 \
    --learning_rate 5e-4 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --template mistral \
    --flash_attn \
    --fp16

Loading the dataset fails with:

Converting format of dataset:   0%|          | 0/1 [00:00<?, ? examples/s]
Traceback (most recent call last):
  File "/workspace/LLaMA-Factory-main/src/train_bash.py", line 14, in <module>
    main()
  File "/workspace/LLaMA-Factory-main/src/train_bash.py", line 5, in main
    run_exp()
  File "/workspace/LLaMA-Factory-main/src/llmtuner/train/tuner.py", line 31, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/workspace/LLaMA-Factory-main/src/llmtuner/train/sft/workflow.py", line 32, in run_sft
    dataset = get_dataset(tokenizer, model_args, data_args, training_args, stage="sft")
  File "/workspace/LLaMA-Factory-main/src/llmtuner/data/loader.py", line 162, in get_dataset
    all_datasets.append(load_single_dataset(dataset_attr, model_args, data_args))
  File "/workspace/LLaMA-Factory-main/src/llmtuner/data/loader.py", line 111, in load_single_dataset
    return align_dataset(dataset, dataset_attr, data_args)
  File "/workspace/LLaMA-Factory-main/src/llmtuner/data/aligner.py", line 125, in align_dataset
    return dataset.map(
  File "/usr/local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3093, in map
    for rank, done, content in Dataset._map_single(**dataset_kwargs):
  File "/usr/local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3470, in _map_single
    batch = apply_function_on_filtered_inputs(
  File "/usr/local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3349, in apply_function_on_filtered_inputs
    processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
  File "/workspace/LLaMA-Factory-main/src/llmtuner/data/aligner.py", line 61, in convert_sharegpt
    for i, messages in enumerate(examples[dataset_attr.messages]):
  File "/usr/local/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 270, in __getitem__
    value = self.data[key]
KeyError: None
[2024-02-23 18:49:04,710] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 40717

Expected behavior

No response

System Info

No response

Others

No response

hiyouga commented 8 months ago

The format is wrong. See https://github.com/hiyouga/LLaMA-Factory/blob/main/data/glaive_toolcall_10k.json for reference.
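For context: the traceback fails inside convert_sharegpt at examples[dataset_attr.messages] with KeyError: None, which suggests the sharegpt aligner never received a "messages" column; the prompt/query/response names used above are the alpaca-style columns. Below is a minimal sketch of a sharegpt-style file and a matching dataset_info.json entry. The field names follow the convention used by the repo's bundled examples such as the linked glaive_toolcall_10k.json, but treat this as an illustration rather than the exact contents of that file.

agentsft.json, sketched as a top-level list of records, one per dialogue (values abbreviated):

[
  {
    "conversations": [
      { "from": "human", "value": "Interact with a household to solve a task. ..." },
      { "from": "gpt", "value": "OK. I'll follow your instructions and try my best to solve the task." },
      { "from": "human", "value": "Here is your task. ..." },
      { "from": "gpt", "value": "THOUGHT: ...\n ACTION: go to diningtable 1" }
    ]
  }
]

and a dataset_info.json entry that points "messages" at that list (assumed field names):

"agent_instruct": {
  "file_name": "agentsft.json",
  "formatting": "sharegpt",
  "columns": {
    "messages": "conversations"
  }
}

In other words, the file would be a flat list of records rather than a single object whose conversations are keyed by "0", "1", ...; the extra "loss" flags and the separate "id" map are not something the sharegpt reader appears to look at, so they can likely be dropped.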

cbnann commented 6 months ago

The format is wrong. See https://github.com/hiyouga/LLaMA-Factory/blob/main/data/glaive_toolcall_10k.json for reference.

Hello, I'd like to ask whether the glaive_toolcall_10k dataset consists entirely of function_call data, or whether it also mixes in ordinary conversation data? @hiyouga

hiyouga commented 6 months ago

@cbnann It also contains ordinary conversations.
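As a rough illustration of that mix (an assumed structure based on the sharegpt/tool-call convention of the repo's example data; the actual records in glaive_toolcall_10k.json will differ, and get_weather and the dialogue text here are made up), a tool-call dialogue and a plain chat dialogue can sit side by side in the same file:

[
  {
    "conversations": [
      { "from": "human", "value": "What's the weather like in Boston?" },
      { "from": "function_call", "value": "{\"name\": \"get_weather\", \"arguments\": {\"city\": \"Boston\"}}" },
      { "from": "observation", "value": "{\"temperature\": \"12C\", \"condition\": \"cloudy\"}" },
      { "from": "gpt", "value": "It is currently 12C and cloudy in Boston." }
    ],
    "tools": "[{\"name\": \"get_weather\", \"description\": \"Get the current weather for a city\", \"parameters\": {\"type\": \"object\", \"properties\": {\"city\": {\"type\": \"string\"}}, \"required\": [\"city\"]}}]"
  },
  {
    "conversations": [
      { "from": "human", "value": "Tell me a joke." },
      { "from": "gpt", "value": "Why don't scientists trust atoms? Because they make up everything." }
    ],
    "tools": "[]"
  }
]

The point is only that ordinary human/gpt exchanges and function_call/observation turns coexist in one sharegpt file.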