jina-ai / clip-as-service

🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
https://clip-as-service.jina.ai

Windows 10: the server starts in a venv, but the client can't connect to it #338

Open songxh2 opened 5 years ago

songxh2 commented 5 years ago

Running it in the PyCharm terminal gives the output below:

Microsoft Windows [Version 10.0.17134.706] (c) 2018 Microsoft Corporation. All rights reserved.

(venv) F:\PycharmProjects\bert>bert-serving-start -model_dir F:\PycharmProjects\bert\model\chinese_L-12_H-768_A-12 -num_worker=1
usage: F:\PycharmProjects\bert\venv\Scripts\bert-serving-start -model_dir F:\PycharmProjects\bert\model\chinese_L-12_H-768_A-12 -num_worker=1

                      ARG   VALUE
                ckpt_name = bert_model.ckpt
              config_name = bert_config.json
                     cors = *
                      cpu = False
               device_map = []
       fixed_embed_length = False
                     fp16 = False
      gpu_memory_fraction = 0.5
            graph_tmp_dir = None
         http_max_connect = 10
                http_port = None
             mask_cls_sep = False
           max_batch_size = 256
              max_seq_len = 25
                model_dir = F:\PycharmProjects\bert\model\chinese_L-12_H-768_A-12
               num_worker = 1
            pooling_layer = [-2]
         pooling_strategy = REDUCE_MEAN
                     port = 5555
                 port_out = 5556
            prefetch_size = 10
      priority_batch_size = 16
    show_tokens_to_client = False
          tuned_model_dir = None
                  verbose = False
                      xla = False

I:VENTILATOR:freeze, optimize and export graph, could take a while...
I:GRAPHOPT:model config: F:\PycharmProjects\bert\model\chinese_L-12_H-768_A-12\bert_config.json
I:GRAPHOPT:checkpoint: F:\PycharmProjects\bert\model\chinese_L-12_H-768_A-12\bert_model.ckpt
I:GRAPHOPT:build graph...

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

I:GRAPHOPT:load parameters from checkpoint...
I:GRAPHOPT:optimize...
I:GRAPHOPT:freeze...
I:GRAPHOPT:write graph to a tmp file: C:\Users\ADMINI~1\AppData\Local\Temp\tmp2khn4vju
I:VENTILATOR:optimized graph is stored at: C:\Users\ADMINI~1\AppData\Local\Temp\tmp2khn4vju
I:VENTILATOR:bind all sockets
I:VENTILATOR:open 8 ventilator-worker sockets
I:VENTILATOR:start the sink
I:SINK:ready
I:VENTILATOR:get devices
W:VENTILATOR:no GPU available, fall back to CPU
I:VENTILATOR:device map: worker 0 -> cpu
I:WORKER-0:use device cpu, load graph from C:\Users\ADMINI~1\AppData\Local\Temp\tmp2khn4vju


The above is where the server output ends, which is not what the documentation shows. My client can't connect to this server; when I debug into the client, it waits forever at this line:

bc = BertClient()  # IP address of the GPU machine; can be omitted if the server runs on the local machine
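Before digging into the client itself, it can help to confirm that the server sockets are reachable at all. A minimal stdlib sketch (assuming the default ports 5555/5556 shown in the config dump above; `server_reachable` is a hypothetical helper, not part of bert-serving):

```python
import socket

def server_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The server binds `port` (default 5555) for requests and `port_out`
# (default 5556) for results; the client needs both to be reachable.
for p in (5555, 5556):
    print(p, server_reachable("127.0.0.1", p))
```

If either port reports unreachable, the hang is on the server side (it never finished starting), not in BertClient.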

songxh2 commented 5 years ago

I want to find the reason, but how can I debug the server code in PyCharm, given that the server runs from the command line?

IndoorsNumber31 commented 5 years ago

Hi, I ran into this problem too. Watching Task Manager, I found that the worker process had failed to start. After setting num_worker to 1 and configuring the server to run on CPU, the service started successfully. For details, take a look at the _run() method of BertWorker in the server module.
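For reference, the combination described above can be passed explicitly on the command line (model path taken from the original report; `-cpu` forces CPU-only mode, matching the `cpu = False` entry in the config dump):

```shell
bert-serving-start -model_dir F:\PycharmProjects\bert\model\chinese_L-12_H-768_A-12 -num_worker=1 -cpu
```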

songxh2 commented 5 years ago

I later debugged into it but still couldn't find the cause. Eventually I suspected that 8 GB of RAM might be too little, so I bought another 8 GB stick; after installing it, everything ran fine!! It really was insufficient memory. After a normal start, the output looks like this:

....
I:VENTILATOR:get devices
W:VENTILATOR:no GPU available, fall back to CPU
I:VENTILATOR:device map: worker 0 -> cpu
I:WORKER-0:use device cpu, load graph from C:\Users\ADMINI~1\AppData\Local\Temp\tmp2pdfg7vm
I:WORKER-0:ready and listening!
I:VENTILATOR:all set, ready to serve request!
I:VENTILATOR:new config request  req id: 1  client: b'f6082f24-5c91-4efb-91b7-e058b786e16e'
I:SINK:send config  client b'f6082f24-5c91-4efb-91b7-e058b786e16e'
I:VENTILATOR:new encode request  req id: 2  size: 3  client: b'f6082f24-5c91-4efb-91b7-e058b786e16e'
I:SINK:job register  size: 3  job id: b'f6082f24-5c91-4efb-91b7-e058b786e16e#2'
I:WORKER-0:new job  socket: 0  size: 3  client: b'f6082f24-5c91-4efb-91b7-e058b786e16e#2'
I:WORKER-0:job done  size: (3, 768)  client: b'f6082f24-5c91-4efb-91b7-e058b786e16e#2'
I:SINK:collect b'EMBEDDINGS' b'f6082f24-5c91-4efb-91b7-e058b786e16e#2' (E:3/T:0/A:3)
I:SINK:send back  size: 3  job id: b'f6082f24-5c91-4efb-91b7-e058b786e16e#2'
I:VENTILATOR:new encode request  req id: 3  size: 3  client: b'f6082f24-5c91-4efb-91b7-e058b786e16e'
I:SINK:job register  size: 3  job id: b'f6082f24-5c91-4efb-91b7-e058b786e16e#3'
I:WORKER-0:new job  socket: 0  size: 3  client: b'f6082f24-5c91-4efb-91b7-e058b786e16e#3'
I:WORKER-0:job done  size: (3, 768)  client: b'f6082f24-5c91-4efb-91b7-e058b786e16e#3'
I:SINK:collect b'EMBEDDINGS' b'f6082f24-5c91-4efb-91b7-e058b786e16e#3' (E:3/T:0/A:3)
I:SINK:send back  size: 3  job id: b'f6082f24-5c91-4efb-91b7-e058b786e16e#3'

songxh2 commented 5 years ago

> Hi, I ran into this problem too. Watching Task Manager, I found that the worker process had failed to start. After setting num_worker to 1 and configuring the server to run on CPU, the service started successfully. For details, take a look at the _run() method of BertWorker in the server module.

Mine already had num_worker set to 1; it worked after I added more memory.

Jeffyangchina commented 5 years ago

Are you also on Windows 10? Could you tell me the matching package versions, or did you modify the code? When I start the server it errors out:

I:VENTILATOR:freeze, optimize and export graph, could take a while...
I:GRAPHOPT:model config: F:\DL\Constant-TL\BERT\chinese_L-12_H-768_A-12\bert_config.json
I:GRAPHOPT:checkpoint: F:\DL\Constant-TL\BERT\chinese_L-12_H-768_A-12\bert_model.ckpt
E:GRAPHOPT:fail to optimize the graph!
Traceback (most recent call last):
  File "d:\dl\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "d:\dl\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\DL\anaconda3\Scripts\bert-serving-start.exe\__main__.py", line 9, in <module>
  File "d:\dl\anaconda3\lib\site-packages\bert_serving\server\cli\__init__.py", line 4, in main
    with BertServer(get_run_args()) as server:
  File "d:\dl\anaconda3\lib\site-packages\bert_serving\server\__init__.py", line 70, in __init__
    self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,))
TypeError: cannot unpack non-iterable NoneType object

IndoorsNumber31 commented 5 years ago

@Jeffyangchina Hi, I didn't modify the code. OS: Windows 10 64-bit Home Edition; Python 3.6 64-bit; virtual environment: venv; IDE: PyCharm. Package and dependency versions:

bert-serving-server==1.8.7
pyzmq==17.1.0
GPUtil==1.4.0
termcolor==1.1.0
numpy==1.16.3
six==1.11.0

asankasan commented 4 years ago

I'm having the same issue. Did anyone manage to fix this?

zhsuiy commented 4 years ago

I had the same issue. The cause might be a lack of memory due to servers already running on the machine. Here is what I did: on the command line, I looked up the PIDs listening on the port and port_out of all the previous servings, then manually killed them. After that, when I ran bert-serving-start, the log ended with 'all set, ready to serve request!'
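Before hunting PIDs, one way to confirm that a stale serving process is still holding the ports is to try binding them; a stdlib sketch (assuming the default 5555/5556 from the config dump earlier; `port_free` is a hypothetical helper):

```python
import socket

def port_free(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if we can bind host:port, i.e. no stale process holds it."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind((host, port))
            return True
    except OSError:
        return False

# port / port_out defaults from the server config
stale = [p for p in (5555, 5556) if not port_free(p)]
if stale:
    print("ports still held by an old serving process:", stale)
```

If a port is held, on Windows the owning PID can then be found with `netstat -ano | findstr :5555` and ended with `taskkill /PID <pid> /F`, which matches the manual cleanup described above.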