meta-llama / llama-recipes

Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default and custom datasets for applications such as summarization and Q&A, along with a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Demo apps showcase Meta Llama for WhatsApp & Messenger.

Update wandb.py to accept setting the run name from a command-line argument (e.g., --wandb_config.name "run_name") for fine-tuning #772

Open · ryf1123 opened this pull request 3 weeks ago

ryf1123 commented 3 weeks ago

What does this PR do?

The current wandb logging generates a random run name, even though wandb supports assigning one. This PR adds the missing `name` attribute to `wandb_config`, so a command-line argument such as `--wandb_config.name "run_name"` can control the run name displayed on the wandb web page.
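For context, here is a minimal sketch of the kind of change described above, assuming the wandb settings live in a dataclass whose fields are forwarded to `wandb.init` (the field names and helper below are illustrative, not the exact contents of `wandb.py`):

```python
from dataclasses import dataclass, asdict
from typing import Optional

import wandb


@dataclass
class wandb_config:
    # Illustrative fields only; the actual dataclass in llama-recipes may differ.
    project: str = "llama_recipes"
    entity: Optional[str] = None
    name: Optional[str] = None  # new field: run name shown in the wandb UI
    mode: Optional[str] = None


def setup_wandb(**overrides):
    """Build the config (with any command-line overrides applied) and start a run."""
    config = wandb_config(**overrides)
    # All fields are forwarded to wandb.init; when `name` is None, wandb
    # falls back to generating a random run name (e.g. stoic-eon-2).
    return wandb.init(**asdict(config))
```

With a field like this in place, passing `--wandb_config.name "run_name"` overrides the default of `None`, and wandb only generates a random name when no name is supplied.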

Feature/Issue validation/testing

Compared the wandb run names before and after this commit, using the argument `--wandb_config.name "fake run name"`. Note that `stoic-eon-2` is a randomly generated name.

[Screenshot (2024-11-04): wandb dashboard showing the run names before and after the change]

Before submitting