Open jhangmez opened 12 hours ago
I get this error when I train, cancel, and then train again.
@jhangmez Coincidentally I just fixed it :)
I updated all the training notebooks - please edit the `TrainingArguments` part by adding `report_to = "none"`. For example:
```python
args = TrainingArguments(
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,
    ...
),
```
should be edited to:
```python
args = TrainingArguments(
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,
    ...
    report_to = "none", # Use this for WandB etc
),
```
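If editing the `TrainingArguments` in every notebook is inconvenient, another common workaround is to disable Weights & Biases globally via an environment variable before the `Trainer` is constructed. A minimal sketch, assuming a recent `transformers` version that honors the `WANDB_DISABLED` variable (note that `report_to = "none"` is the recommended approach):

```python
import os

# Disable Weights & Biases reporting before TrainingArguments/Trainer
# are created, so no wandb.init()/wandb.log() calls are attempted.
os.environ["WANDB_DISABLED"] = "true"
```

This must run before the trainer is built, since the W&B callback is registered when the `Trainer` is initialized.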
I've been training a model with Llama 3.2 1B and this just started happening. I was training 4 hours ago and it didn't happen then. I tried stopping and running again, but then it fails with this error: `wandb.init() before wandb.log()`.
I was training early this morning and the error didn't show up, but now it does. Is this a bug?