OptimalScale / LMFlow

An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
https://optimalscale.github.io/LMFlow/
Apache License 2.0

How to use PyCharm single-step debugging. Tip: running remotely on a server #475

Closed: huizhilei closed this issue 1 year ago

huizhilei commented 1 year ago

I set up the run configuration for finetune.py in PyCharm as below:

--model_name_or_path facebook/galactica-1.3b --dataset_path /root/LMFlow/data/alpaca/train --output_dir /root/LMFlow/output_models/finetune_with_lora --overwrite_output_dir --num_train_epochs 0.01 --learning_rate 1e-4 --block_size 512 --per_device_train_batch_size 1 --use_lora 1 --lora_r 8 --save_aggregated_lora 0 --deepspeed /root/LMFlow/configs/ds_config_zero2.json --bf16 --run_name finetune_with_lora --validation_split_percentage 0 --logging_steps 20 --do_train --ddp_timeout 72000 --save_steps 5000 --dataloader_num_workers 4

When I run finetune.py, it works; but when I debug finetune.py, it shows the error below: ImportError: cannot import name 'AutoPipeline' from partially initialized module 'lmflow.pipeline.auto_pipeline' (most likely due to a circular import) (/root/LMFlow/src/lmflow/pipeline/auto_pipeline.py)

HuihuiChyan commented 1 year ago

Just rename the file evaluate.py to evaluation.py and update the corresponding scripts accordingly. The bug appears because the file evaluate.py clashes with the evaluate package.
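
A quick way to confirm the clash before renaming (a minimal diagnostic sketch, not part of LMFlow; run it from the same working directory and interpreter you use for debugging):

```python
# Check which module the name "evaluate" actually resolves to.
# If the reported origin is a file inside the LMFlow repo rather than the
# installed Hugging Face `evaluate` package, the local script shadows it,
# which is what triggers the circular import under the debugger.
import importlib.util

spec = importlib.util.find_spec("evaluate")
print(spec.origin if spec else "evaluate not found on sys.path")
```

If the printed path ends in LMFlow's own evaluate.py, renaming it (and updating the scripts that call it) removes the shadowing and the ImportError goes away.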

huizhilei commented 1 year ago

Just rename the file evaluate.py to evaluation.py and update the corresponding scripts accordingly. The bug appears because the file evaluate.py clashes with the evaluate package.

Great, thanks! One more question: when running the test script, how do I configure the parameters for the .py file in PyCharm? How do I configure this CUDA_VISIBLE_DEVICES=0 setting in PyCharm, or does it need to be configured at all?

HuihuiChyan commented 1 year ago

I'd suggest switching to VS Code instead.
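
If you want to stay in PyCharm, CUDA_VISIBLE_DEVICES can be set in the run configuration's "Environment variables" field, or directly in code before any CUDA-aware library is imported; a minimal sketch (not LMFlow-specific):

```python
# Restrict this process to GPU 0. The variable only takes effect if it is
# set before torch (or any other CUDA-aware library) initializes CUDA.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch  # imported only after the variable is set

print(torch.cuda.device_count())  # reports 1 if a GPU is visible
```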

shizhediao commented 1 year ago

This issue has been marked as stale because it has not had recent activity. If you think this still needs to be addressed please feel free to reopen this issue. Thanks