ShannonAI / service-streamer

Boosting your Web Services of Deep Learning Applications.
Apache License 2.0

tf.keras 1.15.0 + Streamer + keras_bert throws an error #55

Closed yongzhuo closed 4 years ago

yongzhuo commented 4 years ago

Error: TypeError: can't pickle _thread.RLock objects

Environment: Windows 10, tensorflow==1.15.0

Code:

1. Load the model:

       self.model = model_from_json(open(path_graph, "r", encoding="utf-8").read(),
                                    custom_objects=custom_objects)
       self.model.load_weights(path_model)

2. Predict:

       self.streamer = Streamer(predict_function_or_model=self.model,
                                cuda_devices="-1",
                                max_latency=0.1,
                                worker_num=1,
                                batch_size=32)

3. Experiment: wrapping the model loading in `with self.sess.as_default():` and `with self.graph.as_default():` did not help; text_cnn works, but keras_bert does not. Adding `tf.reset_default_graph()` did not help either.

Meteorix commented 4 years ago

Do you have a traceback?

yongzhuo commented 4 years ago

Odd — does streamer use pickle's dump internally?

Traceback (most recent call last):
  File "Macropodus/macropodus/network/service/server_streamer.py", line 155, in <module>
    model_server = ServiceNer(path, cuda_devices="-1", max_latency=0.1, worker_num=1, batch_size=32)
  File "Macropodus/macropodus/network/service/server_streamer.py", line 120, in __init__
    self.streamer_init()
  File "Macropodus/macropodus/network/service/server_streamer.py", line 141, in streamer_init
    batch_size=self.batch_size)
  File "Macropodus\macropodus\network\service\server_base.py", line 264, in __init__
    self._setup_gpu_worker()
  File "Macropodus\macropodus\network\service\server_base.py", line 276, in _setup_gpu_worker
    p.start()
  File "anaconda3\envs\tf115\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "anaconda3\envs\tf115\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "anaconda3\envs\tf115\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "anaconda3\envs\tf115\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "anaconda3\envs\tf115\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects
Meteorix commented 4 years ago

Yes, it's needed for inter-process communication. You can switch to the threaded mode first.
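The suggested switch is to `ThreadedStreamer`, which batches requests across threads inside a single process, so the model is never pickled. A minimal sketch of the swap; `dummy_predict` and the `ImportError` fallback class are illustrative assumptions so the sketch runs without service-streamer installed, not part of the library:

```python
# Threaded mode keeps everything in one process: no multiprocessing spawn,
# hence no pickling of the model or its internal _thread.RLock members.
try:
    from service_streamer import ThreadedStreamer
except ImportError:
    # Illustrative stand-in: calls the predict function directly and skips
    # the request batching that the real ThreadedStreamer performs.
    class ThreadedStreamer:
        def __init__(self, predict_function, batch_size, max_latency=0.1):
            self._predict = predict_function

        def predict(self, batch):
            return self._predict(batch)


def dummy_predict(batch):
    # Stand-in for the real batched predict function (e.g. self.model.predict).
    return [len(text) for text in batch]


streamer = ThreadedStreamer(dummy_predict, batch_size=32, max_latency=0.1)
print(streamer.predict(["中文分词", "ner"]))
```

The trade-off: threaded mode avoids the pickling problem entirely, but Python's GIL means it only helps when the predict call itself releases the GIL (as TF ops do).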

yongzhuo commented 4 years ago

Then it must be a tf.keras / keras issue — their models don't support pickle.dump.
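The root cause is reproducible without TensorFlow at all: any object holding a `threading.RLock` (which tf.keras models do, in their session/graph bookkeeping) fails under pickle, and Windows multiprocessing spawns workers by pickling them. `ModelLike` below is a toy stand-in, not a real model:

```python
import pickle
import threading


class ModelLike:
    """Toy object mimicking a tf.keras model that holds a lock internally."""

    def __init__(self):
        self._lock = threading.RLock()


try:
    pickle.dumps(ModelLike())
except TypeError as err:
    # Same failure as the traceback above; the exact wording varies by
    # Python version, e.g. "can't pickle _thread.RLock objects".
    print(err)
```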

yongzhuo commented 4 years ago

Strange, thread mode still doesn't work...

Exception in thread thread_worker:
Traceback (most recent call last):
  File "anaconda3\envs\tf115\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "anaconda3\envs\tf115\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "Macropodus\macropodus\network\service\server_base.py", line 165, in run_forever
    handled = self._run_once()
  File "Macropodus\macropodus\network\service\server_base.py", line 194, in _run_once
    model_outputs = self.model_predict(model_inputs)
  File "Macropodus\macropodus\network\service\server_base.py", line 172, in model_predict
    batch_result: List[str] = self._predict(batch_input)
TypeError: 'AlbertBilstmPredict' object is not callable

Traceback (most recent call last):
  File "pycharm\2017.1\PyCharm 2017.1\helpers\pydev\pydevd.py", line 1578, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "pycharm\2017.1\PyCharm 2017.1\helpers\pydev\pydevd.py", line 1015, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "pycharm\2017.1\PyCharm 2017.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "Macropodus/macropodus/network/service/server_streamer.py", line 159, in <module>
    res = model_server.predict([ques])
  File "Macropodus\macropodus\network\service\server_base.py", line 147, in predict
    ret = self._output(task_id)
  File "Macropodus\macropodus\network\service\server_base.py", line 137, in _output
    batch_result = future.result(20)  # 20s timeout for any requests
  File "Macropodus\macropodus\network\service\server_base.py", line 54, in result
    raise TimeoutError("Task: %d Timeout" % self._id)
TimeoutError: Task: 0 Timeout
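This second traceback points at a different problem: the worker invokes whatever it was given as a function (`self._predict(batch_input)` in server_base.py), so passing a predictor *object* only works if the object is itself callable; when the call fails, the request never completes and the 20s timeout fires. Either pass the bound predict method, or add `__call__`. A sketch with a hypothetical stand-in class (the real `AlbertBilstmPredict` is not shown in the thread):

```python
class AlbertBilstmPredictLike:
    """Hypothetical stand-in for the AlbertBilstmPredict class in the traceback."""

    def predict(self, batch):
        # Pretend NER tagging: one tag per character, per input sentence.
        return [["O"] * len(text) for text in batch]

    # Option B: make instances callable, so the object itself can be handed
    # to the streamer as predict_function_or_model.
    def __call__(self, batch):
        return self.predict(batch)


predictor = AlbertBilstmPredictLike()

# Option A: pass the bound method rather than the object, e.g.
#   Streamer(predict_function_or_model=predictor.predict, ...)
# Either way the worker's `self._predict(batch)` call now succeeds instead of
# raising "'AlbertBilstmPredict' object is not callable".
assert callable(predictor) and callable(predictor.predict)
```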
JimLee1996 commented 4 years ago

TF and Keras worked fine when I used them. The correct solution: with multiple GPUs, use the multiprocess Streamer + ManagedModel; with a single GPU, use ThreadedStreamer. In either case the model must be initialized inside the process where the streamer runs.

One more thing I noticed recently: mp.Queue pays a heavy pickle/unpickle cost for large numpy arrays (e.g. images), while torch.Tensor is fine because only a handle gets pickled.
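The multi-GPU recipe above means subclassing `ManagedModel` so that each worker process builds the model itself in `init_model`, *after* being spawned: only the class (which pickles fine) crosses the process boundary, never a live model. A sketch; the `ImportError` fallback base class and the toy lambda model are assumptions for illustration, and the real loading code would go where the comment indicates:

```python
try:
    from service_streamer import ManagedModel, Streamer
except ImportError:
    # Illustrative stand-in so the sketch runs without service-streamer.
    class ManagedModel:
        def __init__(self, gpu_id=None):
            self.gpu_id = gpu_id


class ManagedKerasBert(ManagedModel):
    """Each worker process calls init_model AFTER it has been spawned, so the
    (unpicklable) keras-bert model never crosses a process boundary."""

    def init_model(self):
        # In real code, load keras-bert here, e.g.:
        #   self.model = model_from_json(...); self.model.load_weights(...)
        # service-streamer assigns the worker its GPU before this runs.
        self.model = lambda batch: [len(text) for text in batch]  # toy model

    def predict(self, batch):
        return self.model(batch)


# Multi-GPU usage (requires service-streamer; pass the CLASS, not an instance):
#   streamer = Streamer(ManagedKerasBert, batch_size=32, max_latency=0.1,
#                       worker_num=2, cuda_devices=(0, 1))
#   streamer.predict(["中文分词"])
```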

glsoon commented 3 years ago

> TF and Keras worked fine when I used them. The correct solution: with multiple GPUs, use the multiprocess Streamer + ManagedModel; with a single GPU, use ThreadedStreamer. In either case the model must be initialized inside the process where the streamer runs.
>
> One more thing I noticed recently: mp.Queue pays a heavy pickle/unpickle cost for large numpy arrays (e.g. images), while torch.Tensor is fine because only a handle gets pickled.

I'd like to know how, with keras-bert and multiple GPUs, to initialize the model inside the streamer's process. Thanks.