Closed by jiafatom 3 years ago
Any chance this will solve it ? - https://github.com/microsoft/onnxruntime/pull/4271
Yes, it does resolve the issue for the stride=1 case. For stride>1, an ONNX spec change is needed, so I raised an issue there. We can close this onnxruntime issue once the above-mentioned PR gets checked in.
Makes sense, thanks David. Btw - do all four of the new keras models need stride > 1? Or is there a model that uses stride == 1? If there is a real model that uses stride == 1 and the auto_pad attribute, we can run a final verification and merge that fix in.
These four models don't use stride=1; they use stride=2 (the most common case):
- unet_plusplus
- unet
- center-net (this actually includes two models, depending on whether nms is enabled)
Quick clarification though - do we need the auto_pad attribute? Can we use explicit padding, or is it because the model consumes dynamically shaped inputs?
Yes, it is because the model consumes dynamically shaped inputs, so we have to use auto_pad.
Closing, as the ORT change supporting auto_pad in ConvTranspose is checked in.
Describe the bug
ConvTranspose output_shape seems wrong for a dynamic input shape when auto_pad='SAME_LOWER'.
Urgency none
System information master
To Reproduce
I have a keras model and converted it to ONNX. See the attached conv_transpose.onnx.zip (please remove the 'zip' extension at the end). The python code running ORT:
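The original snippet was not preserved in this thread; below is a minimal hypothetical sketch of what such a repro typically looks like. The filename, input shape, and helper name `run` are assumptions, not taken from the issue.

```python
import os

MODEL = "conv_transpose.onnx"  # assumed filename (the unzipped attachment)

def run(model_path=MODEL, shape=(2, 2, 2, 2)):
    # Hypothetical reproduction: feed a random input of the given (assumed)
    # shape to the model and return the output shape reported by ORT.
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession(model_path)
    name = sess.get_inputs()[0].name
    x = np.random.rand(*shape).astype(np.float32)
    return sess.run(None, {name: x})[0].shape

if os.path.exists(MODEL):
    # The issue reports ORT producing (2, 3, 6, 8) while keras gives (2, 2, 2, 8).
    print(run())
```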
The output shape is (2, 3, 6, 8), but the keras output shape is (2, 2, 2, 8). With some debugging, the issue is that `output_shape` in `ComputePadsAndOutputShape()` is unknown for auto_pad='SAME_LOWER'. In this case it seems wrong to apply `out_size = (in_size - 1) * stride + adj + (kernel - 1) * dilation + 1`, because there is no guarantee that `input_size` = `output_size`, which is what `SAME_LOWER` suggests.
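To make the mismatch concrete (the sizes below are hypothetical, chosen for illustration, not taken from the attached model): under SAME semantics a ConvTranspose output spatial size should be `in_size * stride` (which reduces to output = input when stride=1), while the formula quoted above produces a larger value whenever kernel > stride.

```python
def formula_out_size(in_size, stride, kernel, dilation=1, adj=0):
    # The expression applied in ComputePadsAndOutputShape() when the
    # pads are unknown (i.e. no-padding / VALID-style output size).
    return (in_size - 1) * stride + adj + (kernel - 1) * dilation + 1

def same_out_size(in_size, stride):
    # What SAME_LOWER / SAME_UPPER implies for ConvTranspose:
    # the output spatial size tracks the input, scaled by the stride.
    return in_size * stride

# Hypothetical sizes chosen to show the discrepancy (kernel > stride):
in_size, stride, kernel = 2, 2, 3
print(formula_out_size(in_size, stride, kernel))  # 5
print(same_out_size(in_size, stride))             # 4
```

So when auto_pad is SAME_LOWER the pads must be derived from the target output size first, rather than applying the no-padding formula directly.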