Tizzzzy opened this issue 2 weeks ago
Hi! `python -m` runs the installed module, so you have to `pip install` again for local changes to reach the installed llama-recipes module. Alternatively, you can run the Python file directly. From the main llama-recipes folder, please run

`python recipes/finetuning/finetuning.py --use_peft --peft_method lora --quantization --model_name meta-llama/Meta-Llama-3-8B --dataset "openbookqa_dataset" --custom_dataset.file "src/llama_recipes/datasets/openbookqa_dataset.py" --batching_strategy "packing"`

instead of running `src/llama_recipes/finetuning.py`. Let me know if there is any problem.
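The distinction in the reply above can be checked directly: `python -m NAME` resolves `NAME` through `sys.path`, exactly like an `import` statement, so it picks up the installed copy of the package rather than the files in your checkout. A minimal sketch using a stdlib module (substitute `llama_recipes.finetuning` in a real environment):

```python
import importlib.util

# `python -m json.tool` runs whatever file `import json.tool` would load;
# find_spec reveals that file without executing it. The same check against
# "llama_recipes.finetuning" shows whether you are running the installed
# site-packages copy or your local checkout.
spec = importlib.util.find_spec("json.tool")
print(spec.origin)  # absolute path to the module file `python -m` would run
```

If `spec.origin` points into `site-packages`, edits to the git checkout have no effect until you reinstall (or install the checkout in editable mode with `pip install -e .`, assuming the repo supports it).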
System Info
All other packages are the same as in requirements.txt. My local machine has 32 GB RAM. My GPU information:
NVIDIA-SMI 550.54.15; Driver Version: 545.84; CUDA Version: 12.3; NVIDIA GeForce RTX 3070; 8192 MiB
🐛 Describe the bug
I am new to llama-recipes, and I am trying to finetune Llama 3 on a new dataset. Following the finetuning tutorial, I use the command

`python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --model_name meta-llama/Meta-Llama-3-8B --dataset "openbookqa_dataset" --custom_dataset.file "datasets/openbookqa_dataset.py" --batching_strategy "packing"`

to finetune Llama 3. This command runs successfully. The problem is that every time I make a change to the code on my local machine, I first need to run `git checkout main && git pull && pip install -U .`. Then I saw that the singlegpu_finetuning page lists a different command:
`python -m finetuning.py --use_peft --peft_method lora --quantization --model_name meta-llama/Meta-Llama-3-8B --dataset "openbookqa_dataset" --custom_dataset.file "datasets/openbookqa_dataset.py" --batching_strategy "packing"`

I am assuming this command will run my local finetuning Python file at `~\llama-recipes\src\llama_recipes\finetuning.py`. However, this command gives me an error.

Error logs
Expected behavior
I expect both commands to work.
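A note on why the second command fails as written: `python -m` expects a dotted module name, not a filename, so `finetuning.py` is parsed as a submodule `py` inside a package `finetuning`. A small sketch of that lookup:

```python
import importlib.util

# "finetuning.py" is treated as package "finetuning" with submodule "py";
# unless a package named "finetuning" is importable from sys.path, the
# lookup fails, just as `python -m finetuning.py` does.
try:
    importlib.util.find_spec("finetuning.py")
except ModuleNotFoundError as e:
    print("module lookup failed:", e)
```

This is why the reply above suggests either `python -m llama_recipes.finetuning` (installed module) or running the file by path, but never `python -m finetuning.py`.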