Open Aashish-1008 opened 3 years ago
Are you using Tensorflow Object Detection API to train your model?
I am training on Google AI Platform, which uses the Docker image "gcr.io/cloud-ml-algos/image_object_detection:latest" stored in Google Container Registry (GCR). Please check out the link below; I am following this guide for training, model deployment, and inference.
https://cloud.google.com/ai-platform/training/docs/algorithms/object-detection
Hello,
I am using the Cloud ML Engine built-in Docker image for object detection training, "gcr.io/cloud-ml-algos/image_object_detection:latest". Training and inference for a single image work fine in Cloud ML Engine; there is no issue there.
But when I try batch inference on more than one image at a time, the model returns the same output for all of the input images. I am using Cloud ML Engine for inference:
The model is deployed in Cloud ML Engine with this config:
request.json
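The actual request.json is not included above. As a minimal sketch of what a batch request for an AI Platform online-prediction model might look like, the file can be generated like this (the `key`/`image_bytes` instance schema and the placeholder image bytes are assumptions for illustration, not the author's actual file; the real schema depends on the deployed model's serving signature):

```python
import base64
import json

# Hypothetical payloads; in practice these would be the raw bytes of two
# *different* JPEG files read from disk.
images = {
    "1": b"fake-jpeg-bytes-of-first-image",
    "2": b"fake-jpeg-bytes-of-second-image",
}

# AI Platform online prediction expects a top-level "instances" list.
# Each instance here carries a key plus a base64-encoded image; this
# per-instance layout is an assumption, so verify it against the model's
# serving input signature.
instances = [
    {"key": key, "image_bytes": {"b64": base64.b64encode(data).decode("utf-8")}}
    for key, data in images.items()
]

with open("request.json", "w") as f:
    json.dump({"instances": instances}, f)
```

A request built this way contains two clearly distinct instances, which makes it easy to confirm that identical predictions are coming from the service rather than from accidentally duplicated inputs.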
Command to make an inference request to ML Engine:
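The exact command is not shown above; for a model deployed on AI Platform, a batch of instances in request.json is typically sent like this (MODEL_NAME and VERSION_NAME are placeholders for the author's actual model and version):

```
gcloud ai-platform predict \
  --model=MODEL_NAME \
  --version=VERSION_NAME \
  --json-request=request.json
```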
Let's assume the inference result for the first image (key: 1) is "INFERENCE_RESULT_1".
MODEL INFERENCE RESULT:
The two input images used for prediction are different, and they produce different predictions when I make inference requests separately. But when I send them in a batch (of more than one), the model predicts the same output for all of the images.
Thanks in advance.