zzh8829 / yolov3-tf2

YoloV3 Implemented in Tensorflow 2.0
MIT License

Doing multiple batch inference? #367

Open nikkkkhil opened 3 years ago

nikkkkhil commented 3 years ago

I have deployed the yolov3 object detection model on TF Serving. I can successfully run inference on a single image. Now I want to test server capacity with batches of images, but when I pass multiple images I get this error:

```
Can not squeeze dim[0], expected a dimension of 1, got 6
	 [[{{node yolov3/yolo_nms/Squeeze}}]]
```

The error points to a line in models.py. Does this model support batched inference?
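For anyone puzzled by the message: the NMS step squeezes the batch dimension, which only succeeds when that dimension is exactly 1. A minimal sketch of the failure, using NumPy's `squeeze` as a stand-in for the `tf.squeeze` call inside `yolo_nms` (the shapes here match the issue, but this is an illustration, not the repo's actual code):

```python
import numpy as np

# A batch of 1 squeezes fine; a batch of 6 reproduces the error.
single = np.zeros((1, 416, 416, 3), dtype=np.float32)
batch = np.zeros((6, 416, 416, 3), dtype=np.float32)

out = np.squeeze(single, axis=0)
print(out.shape)  # (416, 416, 3)

try:
    np.squeeze(batch, axis=0)  # dim[0] is 6, not 1
except ValueError as e:
    print("squeeze failed:", e)
```

So the served graph is effectively pinned to batch size 1 at the NMS node, regardless of what shape the request tensor carries.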

```python
load_imgs = load_images_from_dir("/content/yolov3-tf2/image_data/", 416, 6)
print(load_imgs.shape)  # (6, 416, 416, 3)
```

```python
request.inputs["input"].CopyFrom(
    tf.make_tensor_proto(
        load_imgs,
        dtype=types_pb2.DT_FLOAT,
        shape=load_imgs.shape,
    )
)
```

Can I pass an arbitrary number of images to a model trained with a different batch size, or is the exported model hardcoded to a specific batch size? Or am I calling it the wrong way?
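Until the graph supports batches, one workaround is to split the array into single-image requests and collect the results. A sketch, where `predict_fn` is a hypothetical stand-in for whatever call sends one request to the TF server:

```python
import numpy as np

def predict_in_unit_batches(images, predict_fn):
    """Send one image per request (batch of exactly 1) and collect results.

    `images` has shape (N, H, W, C); `predict_fn` is whatever issues a
    single-image request to the serving endpoint -- hypothetical here.
    """
    results = []
    for img in images:
        # Re-add the batch dimension so each request has shape (1, H, W, C).
        results.append(predict_fn(img[np.newaxis, ...]))
    return results

# Demo with a dummy predict_fn that just echoes the request shape:
imgs = np.zeros((6, 416, 416, 3), dtype=np.float32)
shapes = predict_in_unit_batches(imgs, lambda x: x.shape)
print(shapes[0])  # (1, 416, 416, 3)
```

This trades throughput for correctness; the server still sees only batch-1 tensors, which is what the exported graph expects.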

mauricioCS commented 3 years ago

Hi @nikkkkhil! I was facing exactly the same problem when I tried to predict images divided into batches of size > 1.

During my research I found issue #92, tagged "inference" and "enhancement", but I don't know whether it has been implemented yet.

I'm still studying how to solve this, and I found some material that I think might be useful:

I don't know if this is the correct approach to the problem, but if I find a solution, I'll share it here.

I hope that you can progress too!

talenterj commented 2 years ago

@mauricioCS have you had any ideas?

mauricioCS commented 1 year ago

Sorry for the delay, @talenterj.

I couldn't solve the problem at the time, so I had to run predictions one image at a time.

My goal was to use this model on a Google Cloud TPU. To do this, I divided my dataset into groups of 8 images, since I was using a TPU v2-8 with 8 cores, keeping a 1:1 ratio of images to cores.
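The grouping described above can be sketched as a simple chunking helper (the group size of 8 matches the v2-8's core count; the helper itself is generic):

```python
def chunk(items, size):
    """Split a sequence into consecutive groups of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# e.g. 20 images -> two full groups of 8 plus one partial group of 4
batches = chunk(list(range(20)), 8)
print([len(b) for b in batches])  # [8, 8, 4]
```

Note the last group can be smaller than 8; if the TPU setup requires a fixed per-core count, the final group would need padding or dropping.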