Open PonteIneptique opened 4 years ago
I am bumping this one :)
Sorry, I won't have time for this in the coming months; it's a busy PhD period.
I get it :)
https://github.com/emanjavacas/pie/pull/74#pullrequestreview-546114313 I'm not sure how you'd like this to be tested. I did run things (which is not a test per se, in the sense of CI/unit tests) and the output was the same. But to test it properly, would we have to keep an artificial version of the old predict_max around?
Thanks for the review. I'll try to implement what you ask for, because I still think that's a huge improvement.
I have moved from the loop to tensor operations as requested. The question of tests remains, and I am not sure how to handle it.
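One way to test this without a permanent duplicate implementation would be a regression check that keeps the old loop-based decoding as a reference and asserts the vectorised version matches it on a fixed, seeded input. This is only a sketch: `decode_reference` and `decode_fast` are hypothetical stand-ins, not pie's actual `predict_max` signatures.

```python
import torch

# Hypothetical regression test: keep the previous loop-based decoder as a
# reference and assert the refactored, vectorised one matches it exactly.
# EOS id 2 is an assumption for this toy example.

def decode_reference(scores):
    # naive per-row loop: take argmax at each step until the first EOS
    out = []
    for row in scores:
        seq = []
        for tok in row.argmax(dim=-1).tolist():
            if tok == 2:
                break
            seq.append(tok)
        out.append(seq)
    return out

def decode_fast(scores):
    # vectorised argmax over the whole batch, then truncate at first EOS
    preds = scores.argmax(dim=-1).tolist()
    return [row[:row.index(2)] if 2 in row else row for row in preds]

torch.manual_seed(0)
scores = torch.randn(4, 7, 10)  # (batch, steps, vocab)
assert decode_fast(scores) == decode_reference(scores)
```

Running the old and new code paths on the same seeded batch like this would catch behavioural regressions without keeping the old implementation in the public API.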
Bumping this :)
…ched EOS for prediction (See #73)
The current prediction time is quite slow, and we agree there is room for improvement.
After taking a good look at it, it became clear that we keep computing on items that technically no longer need it: strings that have already reached EOS.
I propose here my refactor of the predict_max function, which stops computing over elements that have reached EOS. There is probably still room for improvement here.
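The idea can be sketched as follows: at each decoding step, drop the rows that just emitted EOS from the batch, so subsequent steps only compute over still-active sequences. This is a minimal illustration, not pie's actual code; `step_fn`, the BOS/EOS ids, and the hidden-state layout are all assumptions.

```python
import torch

EOS = 2  # assumed EOS token id

def greedy_decode(step_fn, hidden, batch_size, max_len=50):
    """Greedy decoding that narrows the batch as sequences finish.

    `step_fn(inp, hidden) -> (logits, hidden)` stands in for one decoder step.
    """
    active = torch.arange(batch_size)                # original row indices
    inp = torch.zeros(batch_size, dtype=torch.long)  # assumed BOS id = 0
    outputs = [[] for _ in range(batch_size)]
    for _ in range(max_len):
        logits, hidden = step_fn(inp, hidden)
        preds = logits.argmax(dim=-1)
        for i, p in zip(active.tolist(), preds.tolist()):
            outputs[i].append(p)
        keep = preds != EOS                          # rows not yet finished
        if not keep.any():
            break
        # only unfinished rows keep being computed on the next step
        active, inp, hidden = active[keep], preds[keep], hidden[keep]
    return outputs

# Toy step function: the row with limit L emits token L+3 for L steps, then EOS.
def toy_step(inp, hidden):
    count, limit = hidden[:, 0], hidden[:, 1]
    tok = torch.where(count >= limit, torch.full_like(count, EOS), limit + 3)
    logits = torch.zeros(len(inp), 10)
    logits[torch.arange(len(inp)), tok] = 1.0
    return logits, hidden + torch.tensor([1, 0])

print(greedy_decode(toy_step, torch.tensor([[0, 1], [0, 2], [0, 3]]), 3))
```

The saving comes from the tensor narrowing (`hidden[keep]` etc.): once most sequences in a batch are done, each remaining step runs on a much smaller batch instead of the full one.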
For a group of 19 sentences over 100 iterations:

| Implementation | Average tagging time | Median tagging time | Total tagging time |
|---|---|---|---|
| default | 0.556127781867981 s | 0.5420029163360596 s | 55.612778186798096 s |
| new | 0.4061899709701538 s | 0.40130531787872314 s | 40.61899709701538 s |