cflavsAmbev opened 4 months ago
I tried following two reported issues, #1337 and #2298, and implemented the following code:
```python
import tensorflow as tf
from tensorflow import keras
from keras_cv.models.object_detection.yolo_v8.yolo_v8_detector import (
    dist2bbox,
    get_anchors,
)

BOX_REGRESSION_CHANNELS = 64

# Decode the raw regression output: a distribution over 16 bins per box side.
# Note: this reshape folds the batch dimension into the first axis, so it
# effectively assumes batch size 1.
preds = model.outputs[0]
preds = tf.reshape(preds, [-1, 4, BOX_REGRESSION_CHANNELS // 4])
preds = tf.linalg.matmul(
    keras.backend.softmax(preds, axis=-1),
    keras.backend.arange(BOX_REGRESSION_CHANNELS // 4, dtype="float32")[..., None],
)
preds = tf.squeeze(preds, -1)

# Turn the decoded distances into xyxy boxes, scaled back to pixels.
anchor_points, stride_tensor = get_anchors(image_shape=model.input_shape[1:3])
stride_tensor = keras.backend.expand_dims(stride_tensor, axis=-1)
model.outputs[0] = dist2bbox(preds, anchor_points) * stride_tensor

model = tf.keras.Model(inputs=model.inputs, outputs=model.outputs)
model.summary()
```
I saved the model as an ONNX file, but when I perform a `sess.run` I get this exception:
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Sub node. Name:'model_7/tf.math.subtract_5/Sub' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:629 onnxruntime::Broadcaster::Broadcaster(gsl::span
```
@sachinprasadhs do you have any updates on this issue?
I'm having the same issue. I tried the solution here and ran into the same problem:
```
Error: SessionRun(Msg("Non-zero status code returned while running Sub node. Name:'YOLOv8_1/Sub' Status Message: /home/runner/work/ort-artifacts-staging/ort-artifacts-staging/onnxruntime/onnxruntime/core/providers/cpu/math/element_wise_ops.h:666 onnxruntime::Broadcaster::Broadcaster(gsl::span
```
I'm using onnxruntime 1.19.0 for inference. The model was produced with tf2onnx v1.16.1 with the opset set to 18. If I lower the opset to 13, I instead get this error:
```
Error: SessionRun(Msg("Non-zero status code returned while running Add node. Name:'YOLOv8_1/Add' Status Message: /home/runner/work/ort-artifacts-staging/ort-artifacts-staging/onnxruntime/onnxruntime/core/providers/cpu/math/element_wise_ops.h:666 onnxruntime::Broadcaster::Broadcaster(gsl::span
```
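For reference, the conversion was along these lines (a sketch using tf2onnx's Python API; `model` is the wrapped Keras model from the snippet above, and the output path is a placeholder):

```python
import tf2onnx

# Convert the wrapped Keras model to ONNX; opset=18 (or 13) as described above.
onnx_model, _ = tf2onnx.convert.from_keras(model, opset=18, output_path="yolov8.onnx")
```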
I think I have isolated the problem.
With the solution proposed here, the YOLOV8Detector model is wrapped and the prediction decoding is done after the fact with the helper functions from keras_cv.
The steps are as follows:
Get the anchor points by passing in the height and width of the image:

```python
anchor_points, stride_tensor = get_anchors(image_shape=model.input_shape[1:3])
```

This results in two tensors with the following shapes:

```
anchor_points: tf.Tensor([], shape=(0, 2), dtype=float32)
stride_tensor: tf.Tensor([], shape=(0, 1), dtype=float32)
```
Which is where the problems begin.
Next, decode the predictions with:

```python
decoded = decode_regression_to_boxes(regression)
```

where `regression` has the shape:

```
<KerasTensor shape=(None, None, 64), dtype=float32, sparse=False, name=keras_tensor_295>
```

resulting in `decoded` with a shape of:

```
<KerasTensor shape=(None, None, 4), dtype=float32, sparse=True, name=keras_tensor_300>
```

which is the expected shape (4 values for x1, y1, x2, y2).
Next, convert the decoded distances into bounding boxes with:

```python
boxes = dist2bbox(decoded, anchor_points) * stride_tensor
```

This results in this shape for `boxes`:

```
<KerasTensor shape=(None, 0, 4), dtype=float32, sparse=True, name=keras_tensor_306>
```
This is incorrect: it should have the shape (None, None, 4). Because it doesn't, the runtime session crashes with the error from my last comment.
It ends up with this shape because of what happens inside the dist2bbox method:
```python
from keras_cv.backend import ops

def mydist2bbox(distance, anchor_points):
    left_top, right_bottom = ops.split(distance, 2, axis=-1)
    # left_top: <KerasTensor shape=(None, None, 2), dtype=float32, sparse=False, name=keras_tensor_301>
    #   (makes sense, 2 values for each prediction)
    # right_bottom: <KerasTensor shape=(None, None, 2), dtype=float32, sparse=False, name=keras_tensor_302>
    #   (makes sense, 2 values for each prediction)
    x1y1 = anchor_points - left_top
    # x1y1: <KerasTensor shape=(None, 0, 2), dtype=float32, sparse=False, name=keras_tensor_303>
    #   (wrong: left_top is subtracted from anchor_points, which has the unexpected (0, 2) shape noted above)
    x2y2 = anchor_points + right_bottom
    # x2y2: <KerasTensor shape=(None, 0, 2), dtype=float32, sparse=False, name=keras_tensor_304>
    #   (same problem as x1y1)
    return ops.concatenate((x1y1, x2y2), axis=-1)  # xyxy bbox
```
This results in the shape `<KerasTensor shape=(None, 0, 4), dtype=float32, sparse=True, name=keras_tensor_306>` for the boxes, which is not expected.
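The broadcast failure is easy to reproduce eagerly in NumPy with concrete stand-ins for the shapes above (8400 here stands in for the number of predictions, which is None in the traces above):

```python
import numpy as np

anchor_points = np.zeros((0, 2), np.float32)   # what get_anchors returns here
left_top = np.zeros((1, 8400, 2), np.float32)  # one half of ops.split(distance)
x1y1 = anchor_points - left_top
# ValueError: operands could not be broadcast together with shapes (0,2) (1,8400,2)
```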
Is there a way to make the shape of the boxes (None, None, 4) as expected?
I figured out the problem. The input shape is variable: (None, None, None, 3) (batch size, height, width, RGB). When this is passed to get_anchors, it doesn't know how large to make the resulting tensor, so it produces a tensor of shape (0, 2) as I described. If I instead fix the image size (in my case to 1280×1024), every call to get_anchors yields a tensor of shape (26880, 2), matching the number of predicted bounding boxes, so the add and subtract operations in the wrapped model broadcast properly, eliminating the errors during inference. Hopefully this is helpful to someone else.
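In code, the difference is easy to check with the get_anchors helper imported above (a sketch; I'm assuming height 1024 and width 1280, following get_anchors' (height, width) convention):

```python
from keras_cv.models.object_detection.yolo_v8.yolo_v8_detector import get_anchors

# With a fixed 1024x1280 input, get_anchors produces one anchor per prediction
# across the three strides (8, 16, 32): 128*160 + 64*80 + 32*40 = 26880.
anchor_points, stride_tensor = get_anchors(image_shape=(1024, 1280))
print(anchor_points.shape)  # (26880, 2) -- matches the number of boxes
```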
I trained a YOLOv8 XS model and exported it as an ONNX file.
I created the inference session by following the code below.
When executing the code, the output_names are `['box', 'class']`. However, when I check each output, the box shape is (1, 8400, 64) and the raw_scores shape is (1, 8400, 6). Checking the box output, I get an array of 64 values per prediction, including negative values. How can I extract the bounding boxes from this output?
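Based on the walkthrough above, those 64 values are the DFL regression distribution (16 bins for each of the four box sides), so a NumPy port of decode_regression_to_boxes + dist2bbox should apply. A minimal sketch, assuming a 640×640 export size (which matches the 8400 predictions); `decode_boxes` is a hypothetical helper name:

```python
import numpy as np
from keras_cv.models.object_detection.yolo_v8.yolo_v8_detector import get_anchors

def decode_boxes(raw_box, anchor_points, stride_tensor, reg_max=16):
    """NumPy port of keras_cv's decode_regression_to_boxes + dist2bbox."""
    b, n, _ = raw_box.shape                     # (1, 8400, 64)
    logits = raw_box.reshape(b, n, 4, reg_max)  # 16 bins per side: l, t, r, b
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)   # softmax over the bins
    dist = (probs * np.arange(reg_max, dtype=np.float32)).sum(axis=-1)  # (b, n, 4)
    left_top, right_bottom = dist[..., :2], dist[..., 2:]
    x1y1 = anchor_points - left_top             # anchors broadcast over the batch dim
    x2y2 = anchor_points + right_bottom
    strides = np.reshape(stride_tensor, (1, -1, 1))
    return np.concatenate([x1y1, x2y2], axis=-1) * strides  # xyxy boxes in pixels

# Stand-in for the (1, 8400, 64) "box" array returned by sess.run:
raw_box = np.random.randn(1, 8400, 64).astype(np.float32)

# 8400 predictions correspond to a 640x640 input: 80*80 + 40*40 + 20*20.
anchor_points, stride_tensor = get_anchors(image_shape=(640, 640))
boxes = decode_boxes(raw_box, np.asarray(anchor_points), np.asarray(stride_tensor))
```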