google / automl

Google Brain AutoML

Support batched inference for EfficientDet Lite1? #1141

Open RobertKrajnak opened 2 years ago

RobertKrajnak commented 2 years ago

Hello, I'm trying to use the pre-trained TFLite EfficientDet Lite1 model (TFLite: efficientdet/lite1/detection/default).

I run inference on multiple images at once, but the output still contains results for only one image: for example, detection_classes has shape (1, 25), where I would expect (n, 25) for a batch of n images.

Where could the problem be? Is batched inference possible with the EfficientDet Lite1 models?
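
To narrow this down, it may help to inspect the model's declared input shape first; a minimal probe, assuming the same model file (in recent TF versions, `shape_signature` marks resizable dimensions with -1):

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="path")  # same model file as below
details = interpreter.get_input_details()[0]
print(details['shape'])            # e.g. [  1 384 384   3]
print(details['shape_signature'])  # dims marked -1 are resizable; a fixed 1 here
                                   # would mean the exported signature is batch-1
```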

My code is as follows (thanks for any pointers):

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="path")
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Attempt to resize both the input and the output tensor to the batch shape.
interpreter.resize_tensor_input(input_details[0]['index'], batch_input.shape)
interpreter.resize_tensor_input(output_details[0]['index'], batch_input.shape)

input_index = interpreter.get_input_details()[0]['index']
interpreter.allocate_tensors()
interpreter.set_tensor(input_index, batch_input)

interpreter.invoke()

output_details = interpreter.get_output_details()
detection_boxes = interpreter.get_tensor(output_details[0]['index'])
detection_classes = interpreter.get_tensor(output_details[1]['index'])
detection_scores = interpreter.get_tensor(output_details[2]['index'])
num_boxes = interpreter.get_tensor(output_details[3]['index'])
```
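
For reference, running the images one at a time with the default batch-1 signature should yield one result set per image; a minimal per-image fallback sketch, assuming `batch_input` is an (n, H, W, 3) uint8 array matching the model's expected input size:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="path")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_details = interpreter.get_output_details()

boxes, classes, scores = [], [], []
for image in batch_input:  # batch_input: (n, H, W, 3) uint8
    # Feed each image as a batch of one, then collect the per-image outputs.
    interpreter.set_tensor(input_index, np.expand_dims(image, axis=0))
    interpreter.invoke()
    boxes.append(interpreter.get_tensor(output_details[0]['index'])[0])
    classes.append(interpreter.get_tensor(output_details[1]['index'])[0])
    scores.append(interpreter.get_tensor(output_details[2]['index'])[0])

detection_boxes = np.stack(boxes)      # (n, 25, 4)
detection_classes = np.stack(classes)  # (n, 25)
detection_scores = np.stack(scores)    # (n, 25)
```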