Hi,
We've put some effort into serving TF-DF models with TensorFlow Serving. We found that the issue was the missing TF-DF inference op in TensorFlow Serving.
Adding the op as a custom op and recompiling the project got us to the point where the model is loaded as expected, at least based on the logs:
Model loaded with 300 root(s), 3936 node(s), and 7 input feature(s).
Successfully loaded servable version {name: penguin_model version: 1}
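As a quick local sanity check (outside of TF Serving), here is a minimal sketch, assuming a local copy of the exported model under `penguin_model/1` (hypothetical path). Importing `tensorflow_decision_forests` registers the custom inference ops in the Python runtime; the recompiled server needs the same kernels linked into its C++ binary.

```python
import tensorflow as tf

# Importing tensorflow_decision_forests registers the TF-DF custom
# inference ops with the TensorFlow runtime; without this import,
# tf.saved_model.load fails with an "Op type not registered" error.
import tensorflow_decision_forests as tfdf  # noqa: F401

# Hypothetical local path to the exported SavedModel version.
model = tf.saved_model.load("penguin_model/1")
serving_fn = model.signatures["serving_default"]

# Inspect the serving signature (should list the 7 input features).
print(serving_fn.structured_input_signature)
```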
However, when we call the server for inference, it crashes with a memory allocation failure:
what(): std::bad_alloc
start_direct_model_server: line 1: 121064 Aborted (core dumped) bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server -v=4 --rest_api_port=9000 --model_name=penguin_model --model_base_path="gs://xxxxxxxxxx-ai_platform/penguin_model"
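For reference, the crash is triggered by a plain REST predict call, roughly like the sketch below. This is a hypothetical example: the feature names and values follow the penguin dataset from the TF-DF beginner tutorial and may not match your serving signature exactly.

```python
import json
import requests

# One hypothetical penguin example; adjust feature names and values to
# match the model's serving signature (7 input features in this case).
instance = {
    "island": "Biscoe",
    "bill_length_mm": 44.5,
    "bill_depth_mm": 15.7,
    "flipper_length_mm": 217.0,
    "body_mass_g": 4875.0,
    "sex": "female",
    "year": 2008,
}

# TF Serving REST API, matching --rest_api_port=9000 and
# --model_name=penguin_model from the command above.
resp = requests.post(
    "http://localhost:9000/v1/models/penguin_model:predict",
    data=json.dumps({"instances": [instance]}),
)
print(resp.status_code, resp.text)
```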
For more detail, see the issue we opened on tensorflow/serving: https://github.com/tensorflow/serving/issues/1865
Our code base can be found at https://github.com/picousse/tensorflow-serving-tfdf.
But be aware that the current code base is not working.
Feedback on this would be much appreciated.
@julienschuermans
@Vedant-R
Hi,
As picousse@ correctly commented, the custom inference ops have to be linked into your binary.
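If it helps to figure out exactly which ops need linking, here is a small sketch that parses the SavedModel proto and lists every op used in its graphs. Any op the stock tensorflow_model_server does not register (for TF-DF these are typically the "SimpleML…" inference ops) is one that must be compiled into the binary. The path is hypothetical.

```python
from tensorflow.core.protobuf import saved_model_pb2

# Hypothetical path to the exported model version.
path = "penguin_model/1/saved_model.pb"

saved_model = saved_model_pb2.SavedModel()
with open(path, "rb") as f:
    saved_model.ParseFromString(f.read())

# Collect op names from the main graph and from all library functions.
ops = set()
for meta_graph in saved_model.meta_graphs:
    for node in meta_graph.graph_def.node:
        ops.add(node.op)
    for function in meta_graph.graph_def.library.function:
        for node in function.node_def:
            ops.add(node.op)

print(sorted(ops))
```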
@picousse
Your work and reports are impressive :). Unfortunately, I was not aware of the issue you created on tf/serving, but now I am :).
Let me answer there.
For future reference: https://github.com/tensorflow/serving/pull/1887
Hi,
I have built the TF-DF model and I am trying to serve it using Docker. I am using the following commands:
I am getting the following issue:
Any solution for this? Thank you!!!