foundation-model-stack / fms-hf-tuning

🚀 Collection of tuning recipes with HuggingFace SFTTrainer and PyTorch FSDP.
Apache License 2.0

feat: Silence certain warnings #316

Closed willmj closed 2 months ago

willmj commented 3 months ago

Description of the change

This PR silences certain warnings. As a preliminary step, it suppresses FutureWarnings when the new TrainingArguments flag `warnings` is set to False (the default). Changes: imported the `warnings` module into sft_trainer.py, and raised the maximum number of class parameters from 7 to 8 in .pylintrc.

With this preliminary change (silencing only FutureWarnings), running tox -e py gives:

Before: 153 passed, 14 skipped, 249 warnings in 76.34s (0:01:16)
After: 153 passed, 14 skipped, 207 warnings in 69.31s (0:01:09)
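A minimal sketch of how such a gate might look. The flag name `warnings` and its False default come from this PR, but the dataclass and helper function below are hypothetical illustrations, not the actual fms-hf-tuning code:

```python
import warnings
from dataclasses import dataclass, field


@dataclass
class ExtraTrainingArgs:
    # Hypothetical subset; the real flag would live alongside the other
    # TrainingArguments fields. False (the default) means "suppress".
    warnings: bool = field(
        default=False,
        metadata={"help": "If False, suppress FutureWarnings during tuning."},
    )


def maybe_silence_warnings(args: ExtraTrainingArgs) -> None:
    """Install an 'ignore' filter for FutureWarning unless the user opted in."""
    if not args.warnings:
        warnings.simplefilter("ignore", category=FutureWarning)
```

Calling `maybe_silence_warnings(ExtraTrainingArgs())` early in the training entrypoint would then drop FutureWarnings emitted by downstream libraries for the rest of the run, while `--warnings True` leaves the default warning behavior untouched.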

Questions for reviewers

How to verify the PR

In root of fms-hf-tuning

python3 tuning/sft_trainer.py \                                        
--model_name_or_path Maykeye/TinyLLama-v0 \
--training_data_path tests/data/twitter_complaints_small.jsonl \
--output_dir outputs/lora-tuning \
--num_train_epochs 5 \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 4 \
--learning_rate 1e-5 \
--response_template "\n### Label:" \
--dataset_text_field "output" \
--use_flash_attn false \
--torch_dtype "float32" \
--peft_method "lora" \
--r 8 \
--lora_dropout 0.05 \
--lora_alpha 16  \
--warnings True

OR

--warnings False

You can also run tox -e py before and after changes.

Was the PR tested