jina-ai / clip-as-service

🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
https://clip-as-service.jina.ai

stop on :use device cpu, load graph from /tmp/tmpldru3y61 #173

Open kedimomo opened 5 years ago

kedimomo commented 5 years ago

It stops at this step for a long time. Is there some issue? Why does this happen?
Is a GPU required?

hanxiao commented 5 years ago

If `pip install -U bert-serving-server bert-serving-client` does not solve your problem, then:

  1. please fill in the issue form, saving time for both of us;
  2. please run the server with `-verbose` and copy-paste the last screen here.
kedimomo commented 5 years ago
  1. I used this command: `bert-serving-start -model_dir chinese_L-12_H-768_A-12/ -num_worker=1 -pooling_strategy=REDUCE_MEAN_MAX -cpu -max_batch_size 16 -verbose`

  2. last screen:

2019-01-08 10:02:45.629528: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-01-08 10:02:45.638923: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
I:GRAPHOPT:[gra:opt:121]:load parameters from checkpoint...
I:GRAPHOPT:[gra:opt:123]:freeze...
INFO:tensorflow:Froze 181 variables.
INFO:tensorflow:Converted 181 variables to const ops.
I:GRAPHOPT:[gra:opt:126]:optimize...
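For reference, the verbose output above ends at the GRAPHOPT `optimize` step, which suggests the hang happens during graph optimization rather than checkpoint loading. A small sketch (my own helper, not part of bert-serving) that pulls the last GRAPHOPT phase out of a captured log, to pin down which startup step stalled:

```python
import re

def last_graphopt_phase(log_text):
    """Return the last GRAPHOPT phase mentioned in a bert-serving verbose log.

    The server prints lines like 'I:GRAPHOPT:[gra:opt:126]:optimize...';
    the final one tells you which startup step it stalled on.
    """
    phases = re.findall(r"I:GRAPHOPT:\[[^\]]+\]:(\w+)", log_text)
    return phases[-1] if phases else None

log = (
    "I:GRAPHOPT:[gra:opt:121]:load parameters from checkpoint...\n"
    "I:GRAPHOPT:[gra:opt:123]:freeze...\n"
    "INFO:tensorflow:Froze 181 variables.\n"
    "I:GRAPHOPT:[gra:opt:126]:optimize...\n"
)
print(last_graphopt_phase(log))  # -> optimize
```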

kedimomo commented 5 years ago

I already ran `pip install -U bert-serving-server bert-serving-client`, then ran `bert-serving-start -model_dir chinese_L-12_H-768_A-12/ -num_worker=1 -pooling_strategy=REDUCE_MEAN_MAX -cpu -max_batch_size 16 -verbose`.

kedimomo commented 5 years ago

@hanxiao Sorry, I tried Aliyun instances with 1 core / 2 GB, 2 cores / 4 GB, and 2 cores / 8 GB, and none of them work; they all hang and never print the "listen" line. Thanks♪(・ω・)ノ

hanxiao commented 5 years ago

Looks like a TensorFlow problem. Which version of TensorFlow are you using?

Please fill in the issue form; also refer to #163 and check whether it is a TensorFlow problem.

kedimomo commented 5 years ago

(rasa_chatbot_cn) [root@izwz9h57hlxvofmi9b5ufvz envs]# pip freeze
absl-py==0.6.1
astor==0.7.1
bert-serving-client==1.6.6
bert-serving-server==1.6.6
certifi==2018.11.29
gast==0.2.0
GPUtil==1.4.0
grpcio==1.16.1
h5py==2.9.0
Keras-Applications==1.0.6
Keras-Preprocessing==1.0.5
Markdown==3.0.1
mkl-fft==1.0.6
mkl-random==1.0.2
numpy==1.15.4
protobuf==3.6.1
pyzmq==17.1.2
scipy==1.1.0
six==1.12.0
tensorboard==1.12.1
tensorflow==1.12.0
termcolor==1.1.0
Werkzeug==0.14.1
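For what it's worth, a quick self-contained way to sanity-check the pinned versions in a `pip freeze` dump like the one above (the `>= 1.10` TensorFlow floor is the requirement stated in the bert-as-service README; the helper itself is my own sketch):

```python
def parse_freeze(text):
    """Parse 'pip freeze' output into a {package: version-tuple} dict."""
    versions = {}
    for line in text.splitlines():
        if "==" in line:
            name, _, ver = line.partition("==")
            versions[name.strip()] = tuple(
                int(p) for p in ver.strip().split(".") if p.isdigit()
            )
    return versions

freeze = """\
bert-serving-client==1.6.6
bert-serving-server==1.6.6
tensorflow==1.12.0
"""

pins = parse_freeze(freeze)
# bert-as-service documents TensorFlow >= 1.10 as a requirement,
# so 1.12.0 should be fine version-wise.
assert pins["tensorflow"] >= (1, 10), "TensorFlow too old for bert-serving-server"
print(pins["tensorflow"])  # -> (1, 12, 0)
```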

I use conda. I then looked at the address you gave me, but it does not seem to apply; the console prints no error message, it just stops here:

I:GRAPHOPT:[gra:opt:121]:load parameters from checkpoint...
I:GRAPHOPT:[gra:opt:123]:freeze...
INFO:tensorflow:Froze 181 variables.
INFO:tensorflow:Converted 181 variables to const ops.
I:GRAPHOPT:[gra:opt:126]:optimize...

thanks

yesxiaoyu commented 5 years ago

The same problem occurred when I used CPU. Could you please tell me what your solution is now?

kedimomo commented 5 years ago

@yesxiaoyu This problem only appears when I use an Aliyun cloud server; on my own computer, even running on CPU, it does not appear. Sorry, I have not solved it.
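One thing worth ruling out on a small cloud instance is memory pressure: graph freezing can stall or be silently OOM-killed when RAM runs out. A sketch (Linux-only, parsing `/proc/meminfo` by hand; demonstrated on a sample string so it runs anywhere) to check available memory before starting the server:

```python
def mem_available_mb(meminfo_text):
    """Extract MemAvailable (in MB) from /proc/meminfo-style text."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            kb = int(line.split()[1])  # /proc/meminfo reports values in kB
            return kb // 1024
    return None

sample = """\
MemTotal:        2048000 kB
MemFree:          512000 kB
MemAvailable:    1024000 kB
"""
print(mem_available_mb(sample))  # -> 1000

# On a real Linux box:
#   mem_available_mb(open("/proc/meminfo").read())
```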

yesxiaoyu commented 5 years ago

@chenbaicheng Thanks for your reply. I have now found that I cannot serve cased_L-24_H-1024_A-16, but cased_L-12_H-768_A-12 and chinese_L-12_H-768_A-12 both work on CPU. I wonder whether my 16 GB of RAM is not enough. Thanks.
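As a rough back-of-the-envelope check (using the commonly cited parameter counts, roughly 110M for BERT-base and 340M for BERT-large; these are my assumptions, not numbers from this thread): the float32 weights alone are well under 2 GB, so the raw parameters are unlikely to exhaust 16 GB by themselves, though graph freezing and optimization can temporarily need a multiple of that.

```python
# Rough float32 memory footprint of the raw weights (assumed param counts).
PARAMS = {
    "cased_L-12_H-768_A-12": 110_000_000,   # BERT-base, ~110M parameters
    "cased_L-24_H-1024_A-16": 340_000_000,  # BERT-large, ~340M parameters
}
BYTES_PER_PARAM = 4  # float32

for model, n in PARAMS.items():
    gb = n * BYTES_PER_PARAM / 1024**3
    print(f"{model}: ~{gb:.2f} GB of float32 weights")
```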