I can also add that training the model with the Keras script results in the same issue. All you need to do is train the model with the following script:
python ~/repo/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_native_keras.py
But be aware that I had to change a line of code in that script to make it work. Line 48 originally reads:
_module_file = os.path.join(_taxi_root, 'taxi_utils_native_keras.py')
And I had to change it to:
_module_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'taxi_utils_native_keras.py')
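For context, here is a quick illustration of why resolving against __file__ is more robust. The _taxi_root value below is a stand-in of my own, not the script's actual definition:

import os

# Stand-in for whatever _taxi_root points to in the pipeline script; if it
# differs from where the repo was cloned, the join produces a path to a
# file that does not exist.
_taxi_root = os.path.join(os.environ['HOME'], 'taxi')
print(os.path.join(_taxi_root, 'taxi_utils_native_keras.py'))

# Resolving against the script's own location works regardless of the
# current working directory or where the repo lives.
_script_dir = os.path.dirname(os.path.realpath(__file__))
print(os.path.join(_script_dir, 'taxi_utils_native_keras.py'))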
Perhaps I could turn this into a PR. If someone could help me understand why these examples are failing, I could fix them all in a single PR. That would be great.
I managed to get a prediction response successfully by manually editing the CSV file. The CSV file is used to compose the request payload (the data fed into the inference engine). The tfx/examples/chicago_taxi_pipeline/serving/chicago_taxi_client.py script, which constructs the request, encodes the first three records of the CSV file into a protocol buffer.
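To make that concrete, here is a minimal sketch of how CSV rows become serialized tf.train.Example protos. This is not the actual client code; the file path and type handling are assumptions on my part:

import csv
import tensorflow as tf

def row_to_example(row):
    """Convert one CSV row (a dict of strings) into a tf.train.Example."""
    feature = {}
    for name, value in row.items():
        if value == '':
            # An empty CSV field contributes no feature at all, which is how
            # "missing features" end up in the serialized payload.
            continue
        try:
            feature[name] = tf.train.Feature(
                float_list=tf.train.FloatList(value=[float(value)]))
        except ValueError:
            feature[name] = tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[value.encode('utf-8')]))
    return tf.train.Example(features=tf.train.Features(feature=feature))

with open('data.csv') as f:             # path is illustrative
    rows = list(csv.DictReader(f))[:3]  # the client uses the first 3 records

serialized = [row_to_example(r).SerializeToString() for r in rows]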
So far, all is good. It's just that the first three records have some fields missing, which results in a protocol-buffer payload with missing features. Apparently, TensorFlow Serving does not tolerate missing features and crashes. If you ask me, this should be fixed on the TensorFlow Serving side, but that does not mean this project is off the hook.
The fact that someone chose this specific dataset to train and test this library with is something that needs to be revisited. I mean, a working example is a must, don't you think?
Bottom line:
The reason sending the inference request crashes the serving container is that the payload has missing features in it.
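My manual fix was editing the CSV itself. A programmatic equivalent, purely as a sketch with illustrative defaults of my own choosing, would be to fill empty fields before encoding:

# Hypothetical guard, not part of the TFX example: replace empty CSV fields
# with explicit defaults so every feature is present in the request payload.
def fill_missing(row, defaults):
    return {key: (value if value != '' else defaults.get(key, '0'))
            for key, value in row.items()}

# Illustrative values only; real defaults should come from the schema.
row = {'trip_miles': '', 'company': 'Flash Cab'}
print(fill_missing(row, {'trip_miles': '0.0'}))
# {'trip_miles': '0.0', 'company': 'Flash Cab'}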
Hi, @ziadloo
Apologies for the delay, and good to hear that you were able to solve your issue with TensorFlow Serving. We'll certainly look into it and make sure our examples are up to date and working. Thank you for your valuable suggestions; we really appreciate your efforts and time.
If you're looking to explore more of the TFX end-to-end pipeline, here is a good example; for TensorFlow Serving, see here; and you can refer to the official documentation here for more serving options.
If your issue is resolved, could you please close it? Or if you need any further assistance, please let us know.
Thank you!
Hi, @ziadloo
Closing this issue due to lack of recent activity for a couple of weeks. Please feel free to reopen it if you need any further assistance or an update.
Thank you!
System information

(pip freeze output):

Describe the current behavior

After training a model using the
python tfx/examples/chicago_taxi_pipeline/taxi_pipeline_local.py
command, I see the saved_model.pb file being created. Then I try to test the model by sending an inference request. While the serving container starts successfully, when I request an inference the container crashes with the following error message:

Describe the expected behavior

I don't know whether the bug lies in the training pipeline, the serving container, or the client code sending the request, but the container should return a response without crashing.
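For reference, this is roughly what the inference request looks like over gRPC. This is a minimal sketch; the model name, signature, input key, port, and feature are assumptions of mine, not values taken from the example:

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

# Build one serialized tf.train.Example as the payload (illustrative feature).
example = tf.train.Example(features=tf.train.Features(feature={
    'trip_miles': tf.train.Feature(float_list=tf.train.FloatList(value=[1.2])),
}))

channel = grpc.insecure_channel('localhost:8500')  # assumed serving port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'chicago_taxi'           # assumed model name
request.model_spec.signature_name = 'serving_default'
request.inputs['examples'].CopyFrom(               # assumed input key
    tf.make_tensor_proto([example.SerializeToString()], dtype=tf.string))

response = stub.Predict(request, timeout=10.0)
print(response)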
Standalone code to reproduce the issue
1. Set up an environment with Python 3.8.15 and the given packages.
2. Create the folders:
3. Download the dataset:
4. Clone the repo:
5. Run the training script:
6. Make the serving script executable:
7. Run the serving container:
8. As mentioned in issue #3563, the file tfx/examples/chicago_taxi_pipeline/serving/chicago_taxi_client.py needs to be edited before we can proceed. Currently, line 185 reads:
while it should be:
9. Make the bash script file executable:
10. Run the inference bash script:
Please note that the location of the schema.pbtxt file on your computer might be different (5 is an auto-generated number; each time you run the training pipeline, a new folder is generated).

Once you send the request, the server crashes and the container exits.
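Since that schema lists exactly which features the model expects (and therefore which fields must not be missing from the payload), one way to inspect it is to parse schema.pbtxt directly. A sketch, with an illustrative path:

from google.protobuf import text_format
from tensorflow_metadata.proto.v0 import schema_pb2

# Illustrative path -- the numbered folder is auto-generated and will differ.
schema_path = '/path/to/pipelines/SchemaGen/schema/5/schema.pbtxt'

schema = schema_pb2.Schema()
with open(schema_path) as f:
    text_format.Parse(f.read(), schema)

# List every feature the model was trained with, along with its type and
# whether the schema requires it to be present in every example.
for feature in schema.feature:
    print(feature.name,
          schema_pb2.FeatureType.Name(feature.type),
          'required' if feature.presence.min_fraction == 1.0 else 'optional')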