EricFillion / happy-transformer

Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models.
http://happytransformer.com
Apache License 2.0

Get the loss per epoch instead of only at steps 500 and 1000 #319

Closed · JVincentG closed this issue 1 year ago

JVincentG commented 1 year ago

Is there any way I can get the loss at each step or epoch so I can generate a line chart of the loss per epoch? Thanks.

turinaf commented 1 year ago

I am looking for this too. Please let us know if there's a way to get the training loss per epoch so that we can visualize it, and the same for the evaluation loss. @EricFillion

Thank you in advance.
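For reference, once per-epoch losses are available, the requested chart is only a few lines of matplotlib. The loss values below are placeholders to be replaced with the values reported during training:

import matplotlib.pyplot as plt

# Placeholder values; substitute the losses reported during training
epochs = [1, 2, 3, 4, 5]
train_loss = [2.31, 1.87, 1.52, 1.30, 1.18]
eval_loss = [2.40, 1.95, 1.70, 1.58, 1.55]

plt.plot(epochs, train_loss, label="training loss")
plt.plot(epochs, eval_loss, label="evaluation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()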

EricFillion commented 1 year ago

Thanks for the feedback. This feature shouldn't be too hard to add. We just need to let the user adjust the eval_steps parameter of Hugging Face's TrainingArguments, and possibly the evaluation_strategy parameter as well.

I'll look into it for the next update.
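For context, this is roughly what those Hugging Face settings look like when set directly on TrainingArguments (a sketch; the output_dir and step counts here are arbitrary):

from transformers import TrainingArguments

# evaluation_strategy="steps" makes the Trainer evaluate every eval_steps
# training steps; logging_steps controls how often the training loss is logged.
training_args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)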

JVincentG commented 1 year ago

I just wanted to follow up on this feature. If you could guide me, @EricFillion, on where to look to update the HappyTransformer model, I would be willing to volunteer and create a pull request to close this issue. Thank you.

EricFillion commented 1 year ago

This feature was added in version 3.0.0.

You can adjust how many times evaluation occurs during training with the eval_steps argument of your TrainArgs class, such as GENTrainArgs's eval_steps parameter. Provide a float between 0 and 1 to specify the fraction of total training steps between evaluations. By default it is 0.1, so evaluation occurs 10 times over the course of training.

from happytransformer import GENTrainArgs

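# eval_steps=0.05: evaluate every 5% of training steps, i.e. 20 times in total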
args = GENTrainArgs(eval_steps=0.05)
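A minimal end-to-end sketch for a text-generation model follows. The file names are placeholders, and passing the eval set via an eval_filepath argument to train is my assumption about the 3.0.0 API, so check the docs for the exact signature:

from happytransformer import HappyGeneration, GENTrainArgs

args = GENTrainArgs(eval_steps=0.05)
happy_gen = HappyGeneration("GPT-2", "gpt2")

# "train.txt"/"eval.txt" are placeholder paths; eval_filepath is assumed here
happy_gen.train("train.txt", args=args, eval_filepath="eval.txt")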