UniversalDot / universal-dot-front-end

An API + React based template for building Substrate Front Ends
The Unlicense

Integration Spike of Tensorflow #26

Closed stojanov-igor closed 2 years ago

stojanov-igor commented 2 years ago

Installed Tensorflow at server 109.235.70.27

curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://109.235.70.27:8501/v1/models/half_plus_two:predict
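The same call can be sketched in Python with only the standard library; the helper names and the use of `urllib` are illustrative, not part of the project:

```python
import json
from urllib import request

# Endpoint from the curl example above (the half_plus_two toy model).
URL = "http://109.235.70.27:8501/v1/models/half_plus_two:predict"

def build_predict_body(instances):
    """Build the row-format JSON body TensorFlow Serving's REST predict endpoint expects."""
    return json.dumps({"instances": instances})

def predict(instances, url=URL):
    """POST the instances and return the parsed response (makes a network call)."""
    req = request.Request(url, data=build_predict_body(instances).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

body = build_predict_body([1.0, 2.0, 5.0])
# half_plus_two computes x / 2 + 2, so the server should answer
# {"predictions": [2.5, 3.0, 4.5]} for this body.
```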

Need to deploy the Sentence Encoder there and a server based on that model.

stojanov-igor commented 2 years ago

sudo systemctl restart docker

sudo docker run -p 8501:8501 --mount type=bind,source=/home/udot_user_01/serving/saved_models,target=/models/my_model -e MODEL_NAME=my_model -t tensorflow/serving

stojanov-igor commented 2 years ago

We can make REST calls to our own servable endpoint at: http://109.235.70.27:8501/v1/models/my_model

Example: curl -d '{"inputs": {"input_20": ["Hello"]}}' -X POST http://localhost:8501/v1/models/my_model:predict
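For reference, the predict endpoint accepts two body shapes; the curl example above uses the columnar `inputs` form. A small sketch of both, where the tensor name `input_20` is taken from this model's serving signature:

```python
import json

# Row format ("instances"): one dict (or bare value) per example.
row_body = json.dumps({"instances": [{"input_20": "Hello"}]})

# Columnar format ("inputs"): one list per named input, as in the curl example above.
col_body = json.dumps({"inputs": {"input_20": ["Hello"]}})

# Either body can be POSTed to
# http://localhost:8501/v1/models/my_model:predict as in the curl example.
```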

API Docs: https://www.tensorflow.org/tfx/serving/api_rest
Multilingual Servable Model: https://tfhub.dev/google/universal-sentence-encoder-multilingual/3

stojanov-igor commented 2 years ago

There is no documented way to check the REST API of saved models at this moment.

We can extract API documentation using saved_model_cli:

Example: saved_model_cli show --dir /home/udot_user_01/serving/saved_models/1 --all

```
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_19'] tensor_info:
        dtype: DT_INT32
        shape: (-1, -1)
        name: serving_default_input_19:0
    inputs['input_20'] tensor_info:
        dtype: DT_INT32
        shape: (-1, -1)
        name: serving_default_input_20:0
    inputs['input_21'] tensor_info:
        dtype: DT_INT32
        shape: (-1, -1)
        name: serving_default_input_21:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['keras_layer_6'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, 768)
        name: StatefulPartitionedCall:0
    outputs['lambda_6'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 768)
        name: StatefulPartitionedCall:1
    outputs['lambda_6_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 768)
        name: StatefulPartitionedCall:2
  Method name is: tensorflow/serving/predict

Concrete Functions:
  Function Name: 'call'
    Option #1, callable with:
        Argument #1 (DType: dict):
            {'input_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_type_ids'), 'input_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_mask'), 'input_word_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_word_ids')}
        Argument #2 (DType: bool): True
        Argument #3 (DType: NoneType): None
    Option #2, callable with:
        Argument #1 (DType: dict):
            {'input_word_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_19'), 'input_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_20'), 'input_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_21')}
        Argument #2 (DType: bool): True
        Argument #3 (DType: NoneType): None
    Option #3, callable with:
        Argument #1 (DType: dict):
            {'input_word_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_19'), 'input_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_21'), 'input_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_20')}
        Argument #2 (DType: bool): False
        Argument #3 (DType: NoneType): None
    Option #4, callable with:
        Argument #1 (DType: dict):
            {'input_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_mask'), 'input_word_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_word_ids'), 'input_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_type_ids')}
        Argument #2 (DType: bool): False
        Argument #3 (DType: NoneType): None

  Function Name: '_default_save_signature'
    Option #1, callable with:
        Argument #1 (DType: dict):
            {'input_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_20'), 'input_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_21'), 'input_word_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_19')}

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1, callable with:
        Argument #1 (DType: dict):
            {'input_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_20'), 'input_word_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_19'), 'input_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_21')}
        Argument #2 (DType: bool): True
        Argument #3 (DType: NoneType): None
    Option #2, callable with:
        Argument #1 (DType: dict):
            {'input_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_mask'), 'input_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_type_ids'), 'input_word_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_word_ids')}
        Argument #2 (DType: bool): True
        Argument #3 (DType: NoneType): None
    Option #3, callable with:
        Argument #1 (DType: dict):
            {'input_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_mask'), 'input_word_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_word_ids'), 'input_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/input_type_ids')}
        Argument #2 (DType: bool): False
        Argument #3 (DType: NoneType): None
    Option #4, callable with:
        Argument #1 (DType: dict):
            {'input_word_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_19'), 'input_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_20'), 'input_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_21')}
        Argument #2 (DType: bool): False
        Argument #3 (DType: NoneType): None
```
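Reading the `serving_default` signature above, a predict call must supply three int32 tensors of shape (-1, -1); the concrete-function dicts show that `input_19` carries the word ids, `input_20` the mask, and `input_21` the type ids. A sketch of the corresponding request body, where the token id values are illustrative placeholders only:

```python
import json

# One instance for the serving_default signature shown above.
# input_19 = input_word_ids, input_20 = input_mask, input_21 = input_type_ids.
# The numeric values are placeholders, not real vocabulary ids.
instance = {
    "input_19": [101, 7592, 102],  # token ids (placeholder values)
    "input_20": [1, 1, 1],         # attention mask
    "input_21": [0, 0, 0],         # segment/type ids
}
body = json.dumps({"instances": [instance]})
```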

stojanov-igor commented 2 years ago

Spike implemented with the Multilingual Universal Sentence Encoder.

API available at: http://109.235.70.27:8501/v1/models/use-multilingual

Steps:

  1. Run Docker on port 8501:

sudo docker run -p 8501:8501 --mount type=bind,source=/home/udot_user_01/serving/saved_models/use-multilingual,target=/models/use-multilingual -e MODEL_NAME=use-multilingual -t tensorflow/serving

  2. This makes the model available to query at the endpoint: http://109.235.70.27:8501/v1/models/use-multilingual
  3. Inspect the API inputs and outputs with: saved_model_cli show --dir /home/udot_user_01/serving/saved_models/use-multilingual/3 --all
  4. Curl with some input to receive predictions: curl -d '{"instances": [["the","quick","brown"],["the","lazy","dog"]]}' -X POST http://localhost:8501/v1/models/use-multilingual:predict

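The steps above can be sketched as a minimal Python client; the function names and the cosine helper are illustrative, and the comparison assumes the servable returns one embedding vector per instance:

```python
import json
import math
from urllib import request

ENDPOINT = "http://localhost:8501/v1/models/use-multilingual:predict"

def embed(instances, url=ENDPOINT):
    """POST the instances to the servable and return the predicted embeddings (network call)."""
    body = json.dumps({"instances": instances}).encode("utf-8")
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Example (requires the running server from step 1):
# vecs = embed([["the","quick","brown"], ["the","lazy","dog"]])
# print(cosine(vecs[0], vecs[1]))
```
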
stojanov-igor commented 2 years ago

Integration Spike completed. See the previous post for the current way of deploying saved models.

Work to be continued in: https://github.com/UniversalDot/universal-dot-front-end/issues/33