hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) #3639

Closed Katehuuh closed 6 months ago

Katehuuh commented 6 months ago

Reminder

Reproduction

The error seems to occur with any DPO dataset, not just dpo_mix_en.

Full log from a clean install at commit d9cdddd:
```cmd
(venv) C:\LLaMA-Factory>set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage orpo --do_train True --model_name_or_path C:\LLaMA-Factory\checkpoints\Llama-2-13b-chat-hf --adapter_name_or_path saves\LLaMA2-13B-Chat\lora\SDprompt_ext --finetuning_type lora --quantization_bit 4 --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset dpo_mix_en --cutoff_len 4096 --learning_rate 5e-05 --num_train_epochs 1.0 --max_samples 100000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 5 --save_steps 1000 --warmup_steps 1000 --optim adamw_torch --output_dir saves\LLaMA2-13B-Chat\lora\SDprompt_ext_orpo --bf16 True --lora_rank 32 --lora_dropout 0.15 --lora_target all --plot_loss True
bin C:\LLaMA-Factory\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda121.dll
[2024-05-08 13:38:57,911] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
W0508 13:38:58.160000 11952 torch\distributed\elastic\multiprocessing\redirects.py:27] NOTE: Redirects are currently not supported in Windows or MacOs.
05/08/2024 13:38:58 - WARNING - llmtuner.hparams.parser - We recommend enable `upcast_layernorm` in quantized training.
05/08/2024 13:38:58 - INFO - llmtuner.hparams.parser - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, compute dtype: torch.bfloat16
[INFO|tokenization_utils_base.py:2085] 2024-05-08 13:38:58,917 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:2085] 2024-05-08 13:38:58,917 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2085] 2024-05-08 13:38:58,917 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2085] 2024-05-08 13:38:58,918 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2085] 2024-05-08 13:38:58,918 >> loading file tokenizer_config.json
05/08/2024 13:38:58 - INFO - llmtuner.data.template - Add pad token:
05/08/2024 13:38:58 - INFO - llmtuner.data.loader - Loading dataset hiyouga/DPO-En-Zh-20k...
Downloading readme: 100%|█████████████████████████████████████████████████████████████████| 1.63k/1.63k [00:00 ### Instruction: continue ### Response: chosen_ids: [1094, 830, 23367, 4649, 276, 1446, 25088, 29892, 591, 3814, 304, 7985, 1749, 3815, 411, 5684, 1741, 6615, 4097, 322, 9999, 292, 4266, 2879, 29889, 1763, 24803, 403, 445, 14321, 29892, 591, 674, 10127, 385, 8034, 2913, 393, 9926, 414, 24771, 322, 907, 28157, 4249, 3815, 5144, 29889, 1334, 674, 884, 7536, 277, 675, 7592, 664, 3987, 322, 4840, 21354, 12084, 8492, 304, 2304, 7592, 3815, 5144, 29889, 13, 13, 797, 6124, 29892, 591, 12242, 304, 6894, 1598, 1749, 22162, 271, 5957, 886, 304, 274, 1008, 304, 1422, 963, 267, 29892, 1316, 408, 6651, 9850, 22162, 1446, 29892, 1532, 2264, 22162, 1446, 363, 11825, 29892, 322, 20954, 8694, 484, 952, 29889, 1334, 674, 16508, 304, 18096, 411, 2999, 6003, 1041, 4822, 5164, 14354, 3186, 8157, 304, 3867, 263, 16984, 3464, 310, 22162, 271, 27482, 29889, 13, 13, 4806, 674, 884, 7985, 1749, 9999, 292, 14231, 304, 6159, 263, 25734, 20026, 29892, 5256, 5281, 14982, 14060, 545, 1549, 13787, 2265, 22056, 14587, 29892, 2902, 993, 616, 9999, 292, 29892, 322, 6901, 4959, 29889, 1334, 674, 3133, 5794, 380, 4401, 304, 2143, 457, 1749, 5786, 2729, 373, 3132, 16705, 322, 12439, 13661, 534, 1975, 304, 7744, 1749, 5957, 886, 304, 278, 3926, 29899, 305, 9776, 4225, 322, 5821, 2063, 310, 1749, 3646, 9999, 29889, 13, 13, 2887, 591, 6287, 29892, 591, 674, 3933, 19355, 304, 318, 561, 1025, 292, 1749, 10655, 304, 3710, 1680, 322, 4511, 5866, 1549, 4327, 1230, 22162, 271, 27482, 393, 7536, 277, 675, 7333, 14321, 29892, 1532, 2264, 29892, 322, 3957, 29889, 2] chosen: As Recharge Retreats grows, we plan to expand our team with additional event coordinators and marketing specialists. To accommodate this growth, we will establish an office space that fosters collaboration and creativity among team members. We will also prioritize remote work options and streamlined communication tools to support remote team members. In addition, we aim to diversify our retreat offerings to cater to different themes, such as solo travel retreats, wellness retreats for parents, and spiritual journeys. We will seek to partner with multiple venues across various locations worldwide to provide a diverse range of retreat experiences. We will also expand our marketing efforts to reach a wider audience, maximizing brand exposure through influencer partnerships, experiential marketing, and virtual events. We will continuously strive to refine our services based on client feedback and identify industry trends to adapt our offerings to the ever-changing needs and preferences of our target market. As we scale, we will remain committed to upholding our mission to empower and connect women through transformative retreat experiences that prioritize personal growth, wellness, and connection. 
rejected_ids: [319, 29889, 317, 1052, 292, 29901, 13, 13, 29896, 29889, 4587, 571, 263, 23272, 895, 1904, 29901, 29408, 4045, 304, 1722, 322, 21994, 830, 23367, 4649, 276, 1446, 23272, 4637, 297, 1422, 12786, 29892, 7985, 292, 1749, 6159, 322, 337, 9947, 7037, 29889, 13, 29906, 29889, 10682, 7395, 21888, 322, 1302, 9733, 11104, 29901, 6204, 1462, 944, 653, 17869, 20873, 491, 27032, 6694, 322, 2304, 304, 15724, 322, 25700, 8852, 297, 2894, 5281, 1009, 1914, 22162, 1446, 29889, 13, 13, 29933, 29889, 360, 1536, 2450, 29901, 13, 13, 29896, 29889, 12027, 392, 2669, 5957, 886, 29901, 10056, 4417, 5684, 22162, 271, 29899, 12817, 5786, 29892, 1316, 408, 1302, 9733, 29892, 373, 17696, 2304, 29892, 22162, 271, 17983, 29892, 470, 22162, 271, 11782, 3277, 3987, 29889, 13, 29906, 29889, 17157, 716, 2791, 1691, 29901, 4358, 4858, 292, 1749, 3646, 9999, 304, 3160, 916, 1261, 1946, 1199, 29892, 1316, 408, 1757, 29892, 13175, 29892, 470, 2702, 4066, 6471, 29889, 13, 13, 29907, 29889, 4623, 12027, 9454, 29901, 13, 13, 29896, 29889, 3455, 1089, 411, 1887, 25700, 29901, 13435, 3717, 403, 411, 5866, 29915, 29879, 25700, 28177, 304, 3814, 6121, 22162, 1446, 322, 7985, 1749, 5534, 10122, 29889, 13, 29906, 29889, 922, 1416, 5220, 292, 28602, 1907, 29901, 349, 1295, 434, 867, 1934, 29892, 13258, 1860, 29892, 322, 658, 550, 304, 1436, 749, 6121, 22162, 1446, 322, 2304, 13184, 297, 716, 2791, 1691, 29889, 13, 13, 2059, 1494, 445, 5381, 3814, 29892, 830, 23367, 4649, 276, 1446, 756, 278, 7037, 304, 4953, 263, 8236, 13113, 310, 5866, 29915, 29879, 22162, 271, 5786, 29892, 3710, 1680, 292, 15724, 322, 25700, 304, 9926, 261, 14321, 29892, 3957, 29892, 322, 7333, 13852, 1549, 26959, 519, 322, 2834, 29899, 264, 5403, 3277, 27482, 29889, 2] rejected: A. Scaling: 1. Offer a franchise model: Allow others to open and operate Recharge Retreats franchises in different regions, expanding our reach and revenue potential. 2. Develop online courses and coaching programs: Create supplementary income streams by offering training and support to individuals and organizations interested in organizing their own retreats. B. Diversification: 1. Expand service offerings: Consider adding additional retreat-related services, such as coaching, ongoing support, retreat evaluation, or retreat financing options. 2. Target new markets: Broadening our target market to include other demographics, such as men, families, or specific interest groups. C. International Expansion: 1. Partner with local organizations: Collaborate with women's organizations abroad to plan international retreats and expand our global presence. 2. Seek funding opportunities: Pursue grants, investments, and loans to finance international retreats and support expansion in new markets. By following this business plan, Recharge Retreats has the potential to become a leading provider of women's retreat services, empowering individuals and organizations to foster growth, connection, and personal transformation through memorable and life-enhancing experiences. 
[INFO|configuration_utils.py:724] 2024-05-08 13:39:31,702 >> loading configuration file C:\LLaMA-Factory\checkpoints\Llama-2-13b-chat-hf\config.json
[INFO|configuration_utils.py:789] 2024-05-08 13:39:31,703 >> Model config LlamaConfig {
  "_name_or_path": "C:\\LLaMA-Factory\\checkpoints\\Llama-2-13b-chat-hf",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 40,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.40.2",
  "use_cache": true,
  "vocab_size": 32000
}
05/08/2024 13:39:31 - WARNING - llmtuner.model.utils.rope - Input length is smaller than max length. Consider increase input length.
05/08/2024 13:39:31 - INFO - llmtuner.model.utils.rope - Using linear scaling strategy and setting scaling factor to 1.0
05/08/2024 13:39:31 - INFO - llmtuner.model.utils.quantization - Quantizing model to 4 bit.
[INFO|modeling_utils.py:3426] 2024-05-08 13:39:31,735 >> loading weights file C:\LLaMA-Factory\checkpoints\Llama-2-13b-chat-hf\model.safetensors.index.json
[INFO|modeling_utils.py:1494] 2024-05-08 13:39:31,747 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:928] 2024-05-08 13:39:31,750 >> Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": 2
}
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████| 3/3 [13:03<00:00, 261.10s/it]
[INFO|modeling_utils.py:4170] 2024-05-08 13:52:35,427 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4178] 2024-05-08 13:52:35,427 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at C:\LLaMA-Factory\checkpoints\Llama-2-13b-chat-hf.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|modeling_utils.py:3719] 2024-05-08 13:52:35,430 >> Generation config file not found, using a generation config created from the model config.
[WARNING|quantizer_bnb_4bit.py:307] 2024-05-08 13:52:35,823 >> You are calling `save_pretrained` to a 4-bit converted model, but your `bitsandbytes` version doesn't support it. If you want to save 4-bit models, make sure to have `bitsandbytes>=0.41.3` installed.
05/08/2024 13:52:35 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled.
05/08/2024 13:52:35 - INFO - llmtuner.model.utils.attention - Using FlashAttention-2 for faster training and inference.
05/08/2024 13:52:35 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
Traceback (most recent call last):
  File "C:\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\LLaMA-Factory\venv\Scripts\llamafactory-cli.exe\__main__.py", line 7, in <module>
    sys.exit(main())
  File "C:\LLaMA-Factory\venv\lib\site-packages\llmtuner\cli.py", line 49, in main
    run_exp()
  File "C:\LLaMA-Factory\venv\lib\site-packages\llmtuner\train\tuner.py", line 41, in run_exp
    run_orpo(model_args, data_args, training_args, finetuning_args, callbacks)
  File "C:\LLaMA-Factory\venv\lib\site-packages\llmtuner\train\orpo\workflow.py", line 30, in run_orpo
    model = load_model(tokenizer, model_args, finetuning_args, training_args.do_train)
  File "C:\LLaMA-Factory\venv\lib\site-packages\llmtuner\model\loader.py", line 137, in load_model
    model = init_adapter(config, model, model_args, finetuning_args, is_trainable)
  File "C:\LLaMA-Factory\venv\lib\site-packages\llmtuner\model\adapter.py", line 137, in init_adapter
    model = PeftModel.from_pretrained(
  File "C:\LLaMA-Factory\venv\lib\site-packages\peft\peft_model.py", line 328, in from_pretrained
    PeftConfig._get_peft_type(
  File "C:\LLaMA-Factory\venv\lib\site-packages\peft\config.py", line 205, in _get_peft_type
    loaded_attributes = cls.from_json_file(config_file)
  File "C:\LLaMA-Factory\venv\lib\site-packages\peft\config.py", line 163, in from_json_file
    json_object = json.load(file)
  File "C:\Python\Python310\lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
  File "C:\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python\Python310\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
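The last frames of the traceback show PEFT reading the adapter's `adapter_config.json` via `json.load` inside `PeftConfig.from_json_file`. A minimal standalone sketch (temporary path, not the actual adapter directory) reproduces the same error message from an empty file:

```python
import json
import tempfile
from pathlib import Path

# Simulate an adapter directory whose adapter_config.json exists but contains zero bytes.
adapter_dir = Path(tempfile.mkdtemp())
config_file = adapter_dir / "adapter_config.json"
config_file.write_text("")  # empty file

try:
    with open(config_file, "r", encoding="utf-8") as f:
        json.load(f)  # same call PEFT makes when loading the adapter config
except json.JSONDecodeError as err:
    print(err)  # -> Expecting value: line 1 column 1 (char 0)
```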

Expected behavior

No response

System Info

No response

Others

No response

Katehuuh commented 6 months ago

Not a dataset issue: "Running tokenizer on dataset" completes successfully. The problem is the resumed LoRA adapter `--adapter_name_or_path saves\LLaMA2-13B-Chat\lora\SDprompt_ext`: its `adapter_config.json` is empty.
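A quick way to confirm this before launching training is to validate the adapter directory up front. The sketch below is only illustrative (the path is the one from the command above; the check is not part of LLaMA-Factory):

```python
import json
from pathlib import Path

adapter_dir = Path(r"saves\LLaMA2-13B-Chat\lora\SDprompt_ext")
config_path = adapter_dir / "adapter_config.json"

# An empty or missing config will trigger the JSONDecodeError seen in the traceback.
if not config_path.exists() or config_path.stat().st_size == 0:
    raise SystemExit(f"{config_path} is missing or empty; re-save or re-export the LoRA adapter.")

try:
    adapter_config = json.loads(config_path.read_text(encoding="utf-8"))
except json.JSONDecodeError as err:
    raise SystemExit(f"{config_path} is not valid JSON: {err}")

# A usable LoRA adapter config normally carries at least these keys.
print({k: adapter_config.get(k) for k in ("peft_type", "r", "lora_alpha", "target_modules")})
```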