jzhang38 / TinyLlama

The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
Apache License 2.0

On the visualization of Wandb during fine-tuning #174

Closed · mli-tian closed this 2 months ago

mli-tian commented 3 months ago

Hello! I found that only two lines of code related to Wandb are used in the fine-tuning code to achieve visualization, but I am not sure whether some additional configuration is needed, because I want to visually observe how the weights change during fine-tuning:

```python
report_to: str = field(
    default='none',
    metadata={"help": "To use wandb or something else for reporting."}
)
```
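
For reference, here is a minimal sketch of the kind of setup being asked about, assuming the script uses a Hugging Face `TrainingArguments`-style dataclass (which the `report_to` field suggests). Setting `report_to='wandb'` only enables metric logging; histograms of weights and gradients additionally require `wandb.watch`. All names below (project name, the toy model) are placeholders, not TinyLlama's actual code:

```python
# Minimal standalone sketch (assumes wandb and torch are installed).
# In an HF-style script, the corresponding change would be report_to="wandb"
# in the training arguments; this demo calls wandb directly on a toy model.
import torch
import torch.nn as nn
import wandb

# mode="offline" lets the script run without a wandb account; drop it to sync.
run = wandb.init(project="finetune-demo", mode="offline")

model = nn.Linear(16, 1)  # stand-in for the fine-tuned model
# log="all" records histograms of both parameters and gradients
# every log_freq optimizer steps.
wandb.watch(model, log="all", log_freq=1)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(5):
    loss = model(torch.randn(8, 16)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    wandb.log({"loss": loss.item()})  # scalars appear in the wandb UI

run.finish()
```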