**geekyogurt** opened this issue 4 years ago
Hi @donghyeonk @jhyuklee, I have hosted BERN on an internal server and have also observed the batch processing issue. Will this capability be introduced? Alternatively, could we edit the code at our end, and if so, what should we take note of?
The `server.py` does not allow multiple text inputs to be sent in a single request. Will this capability be introduced? Is the underlying batch capability of the models being utilised during inference?