zsozso21 closed this issue 2 years ago
Thanks for the report!
The first error occurs because CPUs don't support mixed precision. You could set both instances of `mixed_precision = true` to `mixed_precision = false` in the configuration. Admittedly, this is not very convenient; I'll look into disabling mixed precision altogether when running on CPU, which would probably be nicer than the current assertion.
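The CPU fallback idea could be sketched roughly like this (a minimal illustration only, not thinc's actual implementation; the helper name `resolve_mixed_precision` is hypothetical):

```python
def resolve_mixed_precision(requested: bool, gpu_available: bool) -> bool:
    """Hypothetical helper: only honor mixed_precision when a GPU is present.

    Mixed-precision (float16) kernels are not supported on CPU, so instead
    of raising an assertion error we fall back to full float32 precision.
    """
    return requested and gpu_available


# On CPU, a requested mixed_precision = true would be silently downgraded:
print(resolve_mixed_precision(True, gpu_available=False))  # False
print(resolve_mixed_precision(True, gpu_available=True))   # True
```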
I'll have to look into the second error in more detail, though it seems the trace was not pasted completely?
Fair warning: the accuracy of the biaffine parser is not yet great with a convolutional tok2vec layer. I am currently working on a set of changes that also improves accuracy quite a bit when training a transformer model.
Closing as the first issue was addressed with https://github.com/explosion/thinc/pull/624. If you still run into issues with the second error, feel free to open a new issue with the full stack trace!
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
I applied the experimental biaffine parser based on this example, and it works well when I use the transformer-based architecture, but I got the following error when I tried to use it with a tok2vec model on CPU:
And I got this error with GPU:
How to reproduce the behaviour
I used the following config:
Your Environment