tobegit3hub / simple_tensorflow_serving

Generic and easy-to-use serving service for machine learning models
https://stfs.readthedocs.io
Apache License 2.0
757 stars · 193 forks

where <client.py>?? #23

Closed Johnson-yue closed 6 years ago

Johnson-yue commented 6 years ago

Hi, your repo is very nice, and I became interested in it at first glance. After installing it successfully, I typed `simple_tensorflow_serving --model_base_path="./models/tensorflow_template_application_model" --gen_client="python"`.

Log here:

```
(py35) ➜ ~/Deep_Learning/tmp/simple_tensorflow_serving(master)$ simple_tensorflow_serving --model_base_path="./models/tensorflow_template_application_model" --gen_client="python"
2018-09-05 14:49:46 INFO reload_models: False
2018-09-05 14:49:46 INFO model_name: default
2018-09-05 14:49:46 INFO model_config_file:
2018-09-05 14:49:46 INFO auth_username: admin
2018-09-05 14:49:46 INFO download_inference_images: True
2018-09-05 14:49:46 INFO enable_colored_log: False
2018-09-05 14:49:46 INFO gen_client: python
2018-09-05 14:49:46 INFO custom_op_paths:
2018-09-05 14:49:46 INFO log_level: info
2018-09-05 14:49:46 INFO auth_password: admin
2018-09-05 14:49:46 INFO enable_auth: False
2018-09-05 14:49:46 INFO bind: 0.0.0.0:8500
2018-09-05 14:49:46 INFO host: 0.0.0.0
2018-09-05 14:49:46 INFO enable_cors: True
2018-09-05 14:49:46 INFO model_base_path: ./models/tensorflow_template_application_model
2018-09-05 14:49:46 INFO port: 8500
2018-09-05 14:49:46 INFO debug: False
2018-09-05 14:49:46 INFO model_platform: tensorflow
2018-09-05 14:49:46.225473: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-09-05 14:49:46 INFO Put the model version: 1 online, path: ./models/tensorflow_template_application_model/1
INFO:tensorflow:Restoring parameters from ./models/tensorflow_template_application_model/1/variables/variables
2018-09-05 14:49:46 INFO Restoring parameters from ./models/tensorflow_template_application_model/1/variables/variables
2018-09-05 14:49:46 INFO Put the model version: 2 online, path: ./models/tensorflow_template_application_model/2
INFO:tensorflow:Restoring parameters from ./models/tensorflow_template_application_model/2/variables/variables
2018-09-05 14:49:46 INFO Restoring parameters from ./models/tensorflow_template_application_model/2/variables/variables
(py35) ➜ ~/Deep_Learning/tmp/simple_tensorflow_serving(master)$
```

The next step is `python ./client.py`. I think this `client.py` file should have been generated by the command above, but it wasn't, and I do not know why.

I did find one in `/python_client/client.py`, but I was still puzzled. What went wrong?

tobegit3hub commented 6 years ago

Thanks for your report. @Johnson-yue

The documentation is out of date; we no longer generate the clients with the model. I will update the README soon with the correct usage.

Johnson-yue commented 6 years ago

@tobegit3hub, I tried three ways to install and test simple_tensorflow_serving: pip, source code, and Docker. With the pip and source installs, it looks for `tensorflow`. Why not `tensorflow-gpu`? Does simple_tensorflow_serving not support GPU? I also have issues #23 and #24. With Docker, it always runs `docker run -d -p 8500:8500 tobegit3hub/simple_tensorflow_serving`; I cannot stop it and run another command such as `simple_tensorflow_serving --model_config_file="./examples/model_config_file.json"` because the port is in use. I did not know how to stop it, so I just stopped the Docker container.

tobegit3hub commented 6 years ago

I have updated README.md, and you can try this command to generate client.py.

```
curl http://localhost:8500/v1/models/default/gen_client?language=python > client.py
```
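For reference, a generated client is essentially just a JSON POST against the running server. The sketch below is a hand-written equivalent, not the generated file itself; the `{"model_name": ..., "data": {...}}` payload shape, the `localhost:8500` endpoint, and the `keys`/`features` input names are assumptions based on this project's README examples.

```python
import json
import urllib.request


def build_payload(keys, features, model_name="default"):
    # Assemble the request body. The {"model_name": ..., "data": {...}}
    # shape is an assumption based on the project's README examples.
    return {"model_name": model_name, "data": {"keys": keys, "features": features}}


def predict(payload, endpoint="http://localhost:8500"):
    # POST the JSON payload to a running simple_tensorflow_serving instance
    # and return the decoded JSON response.
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    payload = build_payload(keys=[[1], [2]], features=[[1.0, 2.0], [3.0, 4.0]])
    print(json.dumps(payload))
    # predict(payload) would issue the request against a live server
```

Running the generated `client.py` should do something equivalent to calling `predict(payload)` above against the server started earlier.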

To answer the other questions:

  1. Simple TensorFlow Serving supports GPUs. All you need to do is install `tensorflow-gpu` instead of `tensorflow`. The code always does `import tensorflow`, whether you use GPU or CPU.
  2. You can run `docker run -it -p 8500:8500 tobegit3hub/simple_tensorflow_serving bash` so that you can run other commands in the container's terminal.
  3. It reports that the port is in use if you run multiple containers bound to local port 8500. You can stop the container with `docker ps` and `docker stop $id` instead of shutting down the Docker daemon.