Closed jnissin closed 2 years ago
Additionally, if I use the Keras model API with the `model.predict` function and `batch_size=1`, I get the following error:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,0,84] vs. shape[1] = [1,1,84] [Op:ConcatV2] name: concat
```

I think this is because different images have different numbers of detected bounding boxes (0 vs. 1 here). Is there any way around this?
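A small NumPy sketch of the same shape rule may help illustrate the failure: concatenation requires every axis except the concatenation axis to match, so `[1, 0, 84]` and `[1, 1, 84]` cannot be stacked along the batch axis. One common workaround (the fixed `max_boxes` bound here is a hypothetical value, not something from YOLOv4's config) is to pad each image's detections to the same maximum count first:

```python
import numpy as np

# Hypothetical per-image detection arrays mirroring the error:
# image A has 0 boxes, image B has 1 box, each box described by 84 values.
boxes_a = np.zeros((1, 0, 84), dtype=np.float32)
boxes_b = np.zeros((1, 1, 84), dtype=np.float32)

# Concatenating along the batch axis requires all other axes to match,
# so [1, 0, 84] vs. [1, 1, 84] fails, just like tf.concat / ConcatV2.
try:
    np.concatenate([boxes_a, boxes_b], axis=0)
except ValueError as e:
    print("concat failed:", e)

# Workaround sketch: pad every image to the same maximum box count
# (max_boxes is an assumed fixed upper bound) before concatenating.
max_boxes = 4

def pad_boxes(b, max_boxes=max_boxes):
    pad = np.zeros((b.shape[0], max_boxes - b.shape[1], b.shape[2]), dtype=b.dtype)
    return np.concatenate([b, pad], axis=1)

batched = np.concatenate([pad_boxes(boxes_a), pad_boxes(boxes_b)], axis=0)
print(batched.shape)  # (2, 4, 84)
```

With padding, the per-image outputs all share the shape `[1, max_boxes, 84]` and can be batched; downstream code would then need to mask or filter the padded rows.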
Hey,
I have been successfully using YOLOv4 and YOLOv4-tiny for inference with batch size 1 and an input tensor shape of `[1, 416, 416, 3]`. However, if I try to run the model with batch size > 1, I always encounter the following error. This was received using a batch size of 2, resulting in an input tensor shape of `[2, 416, 416, 3]`. Can anyone point me in the right direction as to what I am doing wrong, or is the model even supposed to work when attempting to run inference with a batch size above one?

Here is some of the code I am using for model loading and inference:
Is there possibly something that I need to change, e.g. under `core/config.py`, to make the model work with different batch sizes? I am confused because, according to the Keras model summary, the input layer has its batch dimension defined as `None`, which would indicate that the model supports variable batch sizes: