hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

ValueError: Output directory already exists and is not empty. Please set overwrite_output_dir. #4612

teddy911405 closed this issue 5 months ago

teddy911405 commented 5 months ago

System Info

Platform: Ubuntu 24.04 LTS

Name: llamafactory
Version: 0.7.2.dev0
Summary: Easy-to-use LLM fine-tuning framework
Home-page: https://github.com/hiyouga/LLaMA-Factory
Author: hiyouga
Author-email: hiyouga@buaa.edu.cn
License: Apache 2.0 License
Location: /home/unsloth/anaconda3/envs/llama_factory/lib/python3.10/site-packages
Editable project location: /home/unsloth/LLaMA-Factory
Requires: accelerate, datasets, einops, fastapi, fire, gradio, matplotlib, packaging, peft, protobuf, pydantic, pyyaml, scipy, sentencepiece, sse-starlette, transformers, trl, uvicorn
Required-by:

Reproduction

(llama_factory) unsloth@DESKTOP-ROHA3RV:~/LLaMA-Factory$ llamafactory-cli webui
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().
06/28/2024 18:00:13 - WARNING - llamafactory.hparams.parser - We recommend enable upcast_layernorm in quantized training.
Traceback (most recent call last):
  File "/home/unsloth/anaconda3/envs/llama_factory/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
  File "/home/unsloth/LLaMA-Factory/src/llamafactory/cli.py", line 65, in main
    run_exp()
  File "/home/unsloth/LLaMA-Factory/src/llamafactory/train/tuner.py", line 28, in run_exp
    model_args, data_args, training_args, finetuning_args, generating_args = get_train_args(args)
  File "/home/unsloth/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 257, in get_train_args
    raise ValueError("Output directory already exists and is not empty. Please set overwrite_output_dir.")
ValueError: Output directory already exists and is not empty. Please set overwrite_output_dir.
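For context, the guard that raises this error seems to behave roughly like the sketch below. This is reconstructed from the traceback, not copied from the source: the helper name check_output_dir and the exact condition are assumptions, while the real logic lives in src/llamafactory/hparams/parser.py.

```python
# Rough sketch of the output-directory guard, assuming it mirrors the
# usual HuggingFace-style check. Helper name and signature are hypothetical.
import os


def check_output_dir(output_dir: str, overwrite_output_dir: bool) -> None:
    if (
        os.path.isdir(output_dir)
        and len(os.listdir(output_dir)) > 0  # directory exists and is not empty
        and not overwrite_output_dir
    ):
        raise ValueError(
            "Output directory already exists and is not empty. "
            "Please set overwrite_output_dir."
        )
```

In other words, the error is deliberate: it prevents a previous run's checkpoints from being silently overwritten.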

Expected behavior

Hello everyone,

I have been wanting to fine-tune models using LLaMA-Factory. After installing everything, I encountered an error during my first run.

What happened is exactly the output shown in the Reproduction section above: the web UI launches, but starting training fails with the ValueError about the output directory.

Could anyone please advise on how to resolve this issue?

Thank you!

Others

I have been searching for discussion groups about LLM fine-tuning but haven’t been able to find any.

Does anyone have any recommendations for Discord servers, Telegram groups, Facebook groups, or LINE groups? Both English and Chinese are fine.

I am unable to use WeChat and WhatsApp because I have muscular dystrophy. I type slowly, one character at a time, and I don't have the strength to use a smartphone, which is necessary for both WeChat and WhatsApp.

hiyouga commented 5 months ago

Make sure the output dir displayed in LLaMA Board is empty or does not exist. You can refresh the page to get a new output dir.
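Two concrete ways to act on this advice: delete the stale output directory before relaunching, as in the hedged sketch below (the path is hypothetical; substitute whatever output dir LLaMA Board actually displays), or set overwrite_output_dir in the training arguments, as the error message itself suggests (it is a standard HuggingFace TrainingArguments field).

```python
# Hedged workaround sketch: remove the stale output directory before
# relaunching the web UI. The path below is hypothetical; use the
# output dir shown in LLaMA Board.
import shutil

shutil.rmtree("saves/Custom/lora/train_2024-06-28-18-00-13", ignore_errors=True)
```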