Thanks for the great work on this repo.
However, it's not working under PyTorch 1.1.0 and raises a RuntimeError:
RuntimeError: Failed to export an ONNX attribute, since it's not constant, please try to make things (e.g., kernel size) static if possible
Using PyTorch 1.2.0, I uncommented the two lines in backbone.py, so the code looks like this:
# if you just want to convert to ONNX, uncomment the two lines below
# or, if you want to convert to TVM, just return regression and classification
anchors = self.anchors(inputs, inputs.dtype)
return features, regression, classification, anchors
# return regression, classification
This time, torch.onnx would work, but the printed graph output was strange.
Don't know why there is a "!" in "Float(1, 46917!, 4!)".