dominikmn / one-million-posts

Assisting newspaper moderators with machine learning.

Speed up inference #136

Closed dominikmn closed 3 years ago

dominikmn commented 3 years ago

Current situation

The current inference (inference = prediction on live user data in production) is unnecessarily slow. Each single-sentence prediction currently takes approximately 3.5 seconds.
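
For reference, a minimal timing sketch along these lines could reproduce the measurement. The issue does not specify the model or serving code, so the `transformers` pipeline and the checkpoint name below are placeholder assumptions, not the project's actual implementation:

```python
import time

from transformers import pipeline

# Placeholder checkpoint -- substitute the project's actual fine-tuned model.
classifier = pipeline("text-classification", model="bert-base-german-cased")

sentence = "Das ist ein Beispielsatz aus einem Nutzerkommentar."

# Time a single prediction to reproduce the ~3.5 s per-sentence latency.
start = time.perf_counter()
result = classifier(sentence)
elapsed = time.perf_counter() - start
print(f"Prediction: {result}, took {elapsed:.2f} s")
```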

Possible solutions