`BertTokenizer()` is now instantiated once, at backend startup, rather than on every single prediction. This is achieved by moving the `BertTokenizer()` call from the `get_prediction()` method to the `BinaryClassifier()` class.
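A minimal sketch of the pattern, with a dummy stand-in for the tokenizer (the real code uses `BertTokenizer()`; the stand-in and the prediction body here are illustrative only):

```python
import time

class ExpensiveTokenizer:
    """Stand-in for BertTokenizer(): construction is slow (vocab loading etc.)."""
    def __init__(self):
        time.sleep(0.01)  # simulate the one-time load cost
    def __call__(self, text):
        return text.split()

class BinaryClassifier:
    # Created once, at class-definition time (i.e. backend startup),
    # and shared by every request afterwards.
    tokenizer = ExpensiveTokenizer()

    def get_prediction(self, text):
        # Before the fix, an ExpensiveTokenizer() was constructed here,
        # paying the load cost on every single prediction.
        tokens = self.tokenizer(text)
        return len(tokens) % 2  # dummy binary prediction
```

All instances share the same class-level tokenizer, so no per-request construction happens.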
`num_workers=0` is now set, so data loading happens in the main process. Values > 0 trigger the costly creation of worker sub-processes, each of which requires a complete copy of the `DataLoader()`. For our particular scenario, where the dashboard sends exactly one sample per user input, any value > 0 brings no benefit.
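A minimal sketch of the single-sample case, assuming PyTorch's `DataLoader` (the dataset wrapper and field names are illustrative, not taken from the PR):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class SingleSampleDataset(Dataset):
    """Wraps the one encoded sample the dashboard sends per request."""
    def __init__(self, encoding):
        self.encoding = encoding
    def __len__(self):
        return 1
    def __getitem__(self, idx):
        return self.encoding

encoding = {"input_ids": torch.tensor([101, 2023, 102])}

# num_workers=0: load in the main process. Any value > 0 would spawn
# worker sub-processes, each copying the DataLoader state, for a
# dataset that only ever holds a single sample.
loader = DataLoader(SingleSampleDataset(encoding), batch_size=1, num_workers=0)
batch = next(iter(loader))
# batch["input_ids"] is the sample with a batch dimension of 1 prepended.
```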
Closes #136