Closed yohokuno closed 4 years ago
As the message suggested, just add

```python
if __name__ == '__main__':
    freeze_support()
    ...
```
@Meteorix Thank you for your advice!
I've added `from multiprocessing import freeze_support` at the top of the main file and `freeze_support()` at the beginning of the `if __name__ == '__main__':` block, but got the same error.
For clarity, my Flask object is defined outside that block. Is this causing the problem?
Oh, the doc says `freeze_support` is only necessary to produce a Windows executable, so it should not matter in this case (I am running in an Ubuntu 16.04 container):

> multiprocessing.freeze_support()
> Add support for when a program which uses multiprocessing has been frozen to produce a Windows executable. (Has been tested with py2exe, PyInstaller and cx_Freeze.)
Please try setting `mp_start_method="fork"`. It may be because you initialize some global variables that cannot be duplicated when spawning a new process.
https://github.com/ShannonAI/service-streamer/blob/master/service_streamer/service_streamer.py#L258
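The difference between the two start methods is the crux here. A minimal sketch using only the standard library (the names `GLOBAL_STATE`, `_worker`, and `check_fork_inheritance` are illustrative, not from service-streamer): under `"fork"` a child process inherits a copy of the parent's memory, so globals initialized before the worker starts are still visible; under `"spawn"` the module is re-imported in the child and any global that cannot be pickled or rebuilt (a loaded model, a CUDA context) is lost.

```python
import multiprocessing as mp

# Global state created in the parent process. With the "fork" start method
# the child inherits a copy of it; with "spawn" the module is re-imported
# in the child, so globals that cannot be pickled or re-created (loaded
# models, CUDA contexts, ...) do not survive.
GLOBAL_STATE = {"initialized": True}

def _worker(queue):
    # Report whether the parent's global survived into the child process.
    queue.put(GLOBAL_STATE.get("initialized", False))

def check_fork_inheritance():
    ctx = mp.get_context("fork")  # duplicate the parent's memory on start
    queue = ctx.Queue()
    proc = ctx.Process(target=_worker, args=(queue,))
    proc.start()
    result = queue.get()
    proc.join()
    return result

if __name__ == "__main__":
    print(check_fork_inheritance())  # → True: the child saw the global
```

Note that `"fork"` is only available on Unix-like systems, which fits the Ubuntu container mentioned above.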
@Meteorix Thank you for your advice: `ManagedModel` with TensorFlow is finally working in my app!
Besides using `mp_start_method="fork"` as you suggested, I needed to move `import tensorflow` after `Streamer()` to prevent initialization before `CUDA_VISIBLE_DEVICES` is set.
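The reason the import has to be deferred: TensorFlow reads `CUDA_VISIBLE_DEVICES` when it initializes, so a top-of-file `import tensorflow` runs before the streamer has a chance to pin each worker to a GPU. A small sketch of the pattern, using a hypothetical `_framework_init` as a stand-in for a framework that reads the env var once at initialization time:

```python
import os

def _framework_init():
    # Hypothetical stand-in for a framework (like TensorFlow) that reads
    # CUDA_VISIBLE_DEVICES once, at import/initialization time.
    return os.environ.get("CUDA_VISIBLE_DEVICES", "all")

def init_worker(device_id):
    # Set the env var first, then initialize the framework. This mirrors
    # moving `import tensorflow` after Streamer(): the assignment must
    # happen before the framework ever looks at the variable.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(device_id)
    return _framework_init()
```

If `_framework_init` ran at module import instead, it would see whatever was in the environment before `init_worker` set the device, and every worker would grab the same GPUs.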
Another tweak I needed was to increase `WORKER_TIMEOUT`, which is hardcoded in `service_streamer.py`. I made it configurable and opened Pull Request #76 - please check it out!
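One common way to turn a hardcoded module constant into a configurable setting is to resolve it from an explicit argument, then an environment variable, then the old default. This is only an illustrative sketch of that idea (the function name, precedence order, and default value here are assumptions, not the contents of the PR):

```python
import os

# Default mirrors a hardcoded module-level constant such as WORKER_TIMEOUT.
# The value 20 is illustrative, not service-streamer's actual default.
DEFAULT_WORKER_TIMEOUT = 20

def resolve_worker_timeout(override=None):
    # Precedence: explicit argument > environment variable > default.
    if override is not None:
        return float(override)
    env_value = os.environ.get("WORKER_TIMEOUT")
    if env_value is not None:
        return float(env_value)
    return float(DEFAULT_WORKER_TIMEOUT)
```

This keeps existing callers working unchanged while letting deployments with slow model startup raise the timeout without patching the library.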
Closing this, as the problem is solved for me.
Hi!
Thanks to the simple API, I succeeded in batching my TensorFlow service using `ThreadStreamer`, but could not move on to `ManagedModel` because of the `RuntimeError` at the end of this post. The environment is:
Since I am not familiar with multiprocess programming, I might be missing some background knowledge.
Any idea?