Open AtillaKaanAlkan opened 11 months ago
I can see that your loss is NaN, so I assume this is the reason. Can you debug to see why? Which encoder did you use?
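For anyone debugging the same symptom, a generic guard like the following (a minimal sketch, not part of fastcoref/lingmess) can pinpoint the exact step where the loss diverges:

```python
import math

def check_loss(loss_value, step):
    """Raise as soon as the loss diverges, so the offending step is known.

    Generic debugging sketch; `loss_value` is assumed to be a plain float
    (e.g. the result of loss.item() in PyTorch).
    """
    if math.isnan(loss_value) or math.isinf(loss_value):
        raise RuntimeError(
            f"loss became {loss_value} at step {step}; "
            "common causes: learning rate too high, corrupted examples, "
            "missing gradient clipping"
        )
    return loss_value
```

Calling this once per training step turns a silent NaN into an immediate error with the step number, which makes it much easier to inspect the batch that caused it.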
Hi @shon-otmazgin ,
First of all, thanks for your answer!
I debugged and found the cause of the problem by reducing the value of the eval_steps
parameter. Its default value was too high, maybe because my corpus is small... So, after reducing it to 100, it works.
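For context, here is a minimal sketch of why a large eval_steps silently skips evaluation on a small corpus (this is not the fastcoref code, and the corpus numbers are hypothetical): if eval_steps exceeds the total number of training steps, a `step % eval_steps == 0` trigger never fires.

```python
def training_steps(num_examples, batch_size, epochs):
    """Total optimizer steps for a given corpus size (ceiling per epoch)."""
    steps_per_epoch = -(-num_examples // batch_size)  # ceiling division
    return steps_per_epoch * epochs

def eval_points(total_steps, eval_steps):
    """Steps at which a `step % eval_steps == 0` evaluation trigger fires."""
    return [s for s in range(1, total_steps + 1) if s % eval_steps == 0]

# Hypothetical small corpus: 200 documents, batch size 8, 50 epochs.
total = training_steps(200, 8, 50)   # 25 steps/epoch * 50 epochs = 1250 steps
print(eval_points(total, 2000))      # [] -> evaluation never runs
print(eval_points(total, 100))       # fires every 100 steps, 12 times
```

With a small corpus, the total step count can easily fall below the default eval_steps, so no evaluation (and no checkpoint selection) ever happens; lowering eval_steps fixes that.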
By the way, I tried to fine-tune a model with some other hyperparameters: e.g. when I reduce the value of max_tokens_in_batch
(e.g. to 2500 or another value), I get an error message. For the moment, I am obliged to keep it at its default value (5000).
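One plausible cause (an assumption on my side, I have not checked the fastcoref internals): if a single document is longer than max_tokens_in_batch, it can no longer fit in any batch once the budget is lowered. A toy sketch of token-budget batching shows the failure mode:

```python
def batch_by_tokens(doc_lengths, max_tokens_in_batch):
    """Greedy token-budget batching (toy version, not the fastcoref code)."""
    batches, current, current_tokens = [], [], 0
    for n in doc_lengths:
        if n > max_tokens_in_batch:
            # A single document over the budget can never be placed in a batch.
            raise ValueError(
                f"document of {n} tokens exceeds "
                f"max_tokens_in_batch={max_tokens_in_batch}"
            )
        if current_tokens + n > max_tokens_in_batch:
            batches.append(current)
            current, current_tokens = [], 0
        current.append(n)
        current_tokens += n
    if current:
        batches.append(current)
    return batches

# Hypothetical document lengths: fits at a budget of 5000, fails at 2500.
print(batch_by_tokens([1200, 3000, 800], 5000))  # [[1200, 3000, 800]]
# batch_by_tokens([1200, 3000, 800], 2500)       # -> ValueError
```

If that is the cause, either truncating/splitting long documents or keeping max_tokens_in_batch above the longest document would avoid the error.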
I will try to figure this out in the coming days; I have to work on other projects at the moment. I will write to you if I am not able to solve it :-)
Best, Atilla
Hi @shon-otmazgin ,
I fine-tuned lingmess on my own corpus for 50 epochs. On the last epoch I printed the results, which are the following: As you can see, all scores are zero (even if I give the same training, dev and test sets to the system, they remain zero). I don't understand what is going wrong. Thanks for helping! Atilla