squeaky-pl / japronto

Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.
MIT License

ML model serving with Japronto #162

Open sathishbabu96 opened 4 years ago

sathishbabu96 commented 4 years ago

Is Japronto good enough to serve multiple ML models in parallel with multicore support? Most Python servers use a multithreaded approach and need a WSGI server with multiple workers to make use of multiple cores. Does Japronto support multiprocessing out of the box? A sketch of what I have in mind is below.
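
As a rough illustration (not an authoritative example), here is a minimal sketch of serving one pre-loaded model behind Japronto, based on the `Application` / `router.add_route` / `request.Response` API shown in the project's README. The model file name, the `predict` handler, and the `worker_num` argument to `app.run()` are assumptions on my part and may not match every Japronto version.

```python
# Minimal sketch: serve a single pre-loaded model behind Japronto.
# Assumptions: 'model.pkl' exists, exposes a scikit-learn-style
# predict(); worker_num support in app.run() is assumed, not verified.
import pickle

from japronto import Application

# Load the model once at import time; if the server forks worker
# processes, each worker gets its own copy of this object.
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)


def predict(request):
    # request.json parses the request body as JSON.
    features = request.json['features']
    result = model.predict([features]).tolist()
    return request.Response(json={'prediction': result})


app = Application()
app.router.add_route('/predict', predict)

if __name__ == '__main__':
    # worker_num (assumed parameter) would fork several worker
    # processes so requests can be handled on multiple cores.
    app.run(port=8080, worker_num=4)
```

Would something along these lines be the intended way to use multiple cores, or is an external process manager still required?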