hiyouga / LLaMA-Factory

Unify Efficient Fine-Tuning of 100+ LLMs
Apache License 2.0

On Windows, start fails immediately and llamafactory-cli output is garbled #4625

Closed rizi960 closed 2 days ago

rizi960 commented 2 days ago

Reminder

System Info

```
[Running] set PYTHONIOENCODING=utf8 & C:\Users\administered.conda\envs\llamafactory\python.exe -u "g:\Study\AI\LLaMA-Factory-2\src\webui.py"
[2024-06-30 23:14:21,072] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-06-30 23:14:21,270] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
ERROR: [Errno 10048] error while attempting to bind on address ('0.0.0.0', 7860): Only one usage of each socket address (protocol/network address/port) is normally permitted.
Running on local URL: http://0.0.0.0:7861

To create a public link, set share=True in launch().
'llamafactory-cli' is not recognized as an internal or external command, operable program or batch file.
```

(The last line was originally printed as mojibake, e.g. `�����ڲ����ⲿ���`: the Chinese-locale Windows error message emitted in the console's legacy code page but rendered as UTF-8.)
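The garbled bytes in the log can be reproduced outside LLaMA-Factory. The sketch below is illustrative only, assuming a Chinese-locale console that emits GBK while the terminal decodes UTF-8; the message text is the standard Windows error, not project code:

```python
# -*- coding: utf-8 -*-
# How the mojibake arises: cmd.exe writes the "not recognized as an
# internal or external command" message in the legacy GBK code page,
# but a UTF-8 terminal decodes those bytes as UTF-8 and fails.

message = "不是内部或外部命令,也不是可运行的程序或批处理文件。"

gbk_bytes = message.encode("gbk")               # what the console actually writes
garbled = gbk_bytes.decode("utf-8", "replace")  # what a UTF-8 terminal displays

print(garbled)  # runs of U+FFFD replacement characters, as in the log above
```

Decoding the same bytes with `"gbk"` instead of `"utf-8"` recovers the message intact, which is why aligning the terminal and Python on one encoding fixes the display.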

Reproduction

```shell
llamafactory-cli train \
    --stage sft \
    --do_train True \
    --model_name_or_path G:\Study\AI\Models\Qwen1.5-0.5B-Chat \
    --preprocessing_num_workers 16 \
    --finetuning_type lora \
    --template qwen \
    --flash_attn auto \
    --dataset_dir data \
    --dataset identity \
    --cutoff_len 1024 \
    --learning_rate 0.0003 \
    --num_train_epochs 3.0 \
    --max_samples 5000 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --max_grad_norm 1.0 \
    --logging_steps 5 \
    --save_steps 100 \
    --warmup_steps 0 \
    --optim adamw_torch \
    --packing False \
    --report_to none \
    --output_dir saves\Qwen1.5-0.5B-Chat\lora\train_2024-06-30-23-14-34 \
    --fp16 True \
    --plot_loss True \
    --ddp_timeout 180000000 \
    --include_num_input_tokens_seen True \
    --lora_rank 8 \
    --lora_alpha 16 \
    --lora_dropout 0 \
    --use_dora True \
    --lora_target all
```

Expected behavior

Running on Windows produces garbled CLI output, even though llamafactory-cli is installed and its version can be queried.

Others

```
llamafactory-cli version
[2024-06-30 23:30:07,373] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-06-30 23:30:07,567] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.

Welcome to LLaMA Factory, version 0.8.3.dev0
Project page: https://github.com/hiyouga/LLaMA-Factory
```

hiyouga commented 2 days ago

Change the terminal encoding to UTF-8.
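One way to apply this advice is to force the child Python process to use UTF-8 stdio regardless of the console's code page. This is a minimal sketch using standard CPython environment variables (`PYTHONIOENCODING`, `PYTHONUTF8`), not LLaMA-Factory's own launcher; the `version` subcommand is taken from the report above:

```python
# Sketch: launch the CLI with Python's stdio forced to UTF-8.
import os
import subprocess

env = os.environ.copy()
env["PYTHONIOENCODING"] = "utf-8"  # stdio encoding for the child process
env["PYTHONUTF8"] = "1"            # enable CPython's UTF-8 mode (3.7+)

# Uncomment on a machine where the CLI is installed:
# subprocess.run(["llamafactory-cli", "version"], env=env)
```

In cmd.exe the terminal side of the same fix is `chcp 65001`, which switches the console code page to UTF-8 so that both the console and Python agree on one encoding.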

rizi960 commented 2 days ago

> Change the terminal encoding to UTF-8.

It has always been UTF-8, but llamafactory-cli output is still garbled.