Closed · themichael323 closed this issue 1 year ago
If your model generates floating-point exception errors, consider setting the gradient-clipping parameters (`clip_grad_min` and `clip_grad_max`). There is an example here: https://github.com/sassoftware/python-dlpy/blob/af4874e00edc7a4b7c31646e76057a76d566481c/dlpy/tests/test_embedding_model.py
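To illustrate what `clip_grad_min` and `clip_grad_max` do conceptually, here is a minimal NumPy sketch (not DLPy-specific; the bounds of ±100 are arbitrary example values): each gradient component is clamped into a fixed range, so a single huge gradient cannot overflow the weight update.

```python
import numpy as np

def clip_gradient(grad, clip_min=-100.0, clip_max=100.0):
    """Clamp every gradient component into [clip_min, clip_max]."""
    return np.clip(grad, clip_min, clip_max)

# An "exploding" gradient: two components are far outside float-safe ranges.
exploding = np.array([1e30, -1e30, 0.5])
clipped = clip_gradient(exploding)
print(clipped)  # components capped to [-100, 100]; 0.5 passes through
```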
Hi,
I'm working on a sequence labeling model for an NLP task (`dlpy.applications.SequenceLabeling`), and while trying to fit the data I got this error: "ERROR: A floating-point overflow exception occurred, halting the analysis. This condition is usually caused by improperly scaled inputs, a large learning rate, or exploding gradients."
This is my code:
The input consists of ten columns, with one word per column in each row; the labels (varchar type) are represented the same way.
I tried running it with different learning rates, but I get the same error every time. Any ideas how to fix this?
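For reference, the gradient-clipping parameters mentioned in the reply are solver-level settings in DLPy. The sketch below is hedged: the numeric values are hypothetical starting points, and since the real DLPy objects need a live CAS session, only the parameter set is executable here; the commented lines show roughly where those parameters would go (names like `train_tbl` and the solver choice are assumptions, not from this thread).

```python
# Hypothetical starting values for the clipping parameters suggested above;
# tune the bounds and learning rate for your own data.
solver_params = {
    'learning_rate': 0.0005,   # a smaller learning rate also mitigates overflow
    'clip_grad_min': -100.0,   # assumed lower bound on each gradient component
    'clip_grad_max': 100.0,    # assumed upper bound on each gradient component
}

# Rough usage sketch (untested; assumes an active CAS session and a built model):
# from dlpy.model import Optimizer, MomentumSolver
# optimizer = Optimizer(algorithm=MomentumSolver(**solver_params), max_epochs=20)
# model.fit(data=train_tbl, optimizer=optimizer)
```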