Closed eladmeir closed 11 months ago
pip show onnx2tf
Name: onnx2tf
Version: 1.18.14
onnx2tf -i yolox_s.onnx -cotof
With the understanding that you are looking at the same conversion log as I am, the output is an exact match, so the problem must be in your own logic or in TensorFlow v1.
dup: https://github.com/PINTO0309/onnx2tf/issues/507
Frankly, I don't recommend the Protocol Buffer (.pb) format, and I don't think Google is willing to keep maintaining it either.
Thanks for your quick reply
I want to point out that I am using TF 2.13, so maybe the "TF<2.10.0" tag is irrelevant. Also, I do not think this is a duplicate of #507, because of the specific code that both of us (that was me on #507 :) ) added, which is not available anywhere in this repository. It is a fairly complex set of lines, and without them one cannot make good use of the .pb model.
As for your tip - Protocol Buffer is used by TF by default, so are you basically suggesting not to use .pb models at all? I was not aware that .pb was not good practice; could you elaborate and maybe point to a better solution for TF?
I want to point out that I am using TF 2.13, so maybe the "TF<2.10.0" tag is irrelevant
The .pb of TF v1.x and the saved_model.pb of TF v2.x have distinctly different specifications. For example, in the v1.x .pb, differences in the parameters available for Resize and bugs in TensorFlow's internal implementation are still present. As a stopgap, Keras was forcibly merged in and is forcibly redirected to the V1 logic via compat.v1.
In other words, we are aware that no maintenance (including fixes for bugs in the specification) has been done on the parts of the system where the V1-based internal logic is called.
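To make the distinction above concrete: a TF1 "frozen graph" .pb is a single serialized GraphDef file, while TF2's saved_model.pb lives inside a SavedModel directory alongside a variables/ subdirectory. A minimal sketch of telling the two apart by structure (the helper name and paths are assumptions for illustration, not part of onnx2tf):

```python
import os

# A TF1 frozen graph is a single .pb file (a serialized GraphDef);
# a TF2 SavedModel is a directory containing saved_model.pb plus,
# typically, a variables/ subdirectory. Hypothetical helper:
def detect_format(path):
    if os.path.isdir(path) and os.path.isfile(os.path.join(path, "saved_model.pb")):
        return "tf2_saved_model"
    if os.path.isfile(path) and path.endswith(".pb"):
        return "tf1_frozen_graph"
    return "unknown"
```

Loading also differs accordingly: the TF1 file goes through `tf.compat.v1.GraphDef` plus `tf.import_graph_def`, which is exactly the compat.v1 redirection into legacy logic mentioned above, while a SavedModel directory is loaded natively with `tf.saved_model.load`.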
I'm not going to go back and read TensorFlow logic that is two to three years old and try to solve the problem.
Oh, I get what you are saying... Thanks for the explanation.
Maybe I will try to see if I can replace my TF v1.x server with something more stable
And once again - thanks for the wonderful work that you are doing here.
If there is no activity within the next two days, this issue will be closed automatically.
Just a quick update, for the sake of future readers - I found a tiny (but obviously major) bug in my Python NMS, which had replaced the original torch implementation of NMS from the original YoloX repository
After fixing the bug, I was able to verify that the overall YoloX .pth -> .onnx -> .pb (TF1.x format) pipeline is almost perfect, meaning the three models are identical up to a tiny fraction of error (the error is negligible for most scenarios, and is almost zero when testing across multiple model weights and benchmarks)
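For future readers, a minimal sketch of the kind of parity check described above, assuming the outputs of the three models have been collected as numpy arrays; `outputs_match` is a hypothetical helper, not part of onnx2tf (which has its own built-in check via `-cotof`):

```python
import numpy as np

# Hypothetical helper: compare two model outputs and report whether
# they agree within an absolute tolerance, along with the max error.
def outputs_match(a, b, atol=1e-4):
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    if a.shape != b.shape:
        return False, float("inf")
    max_err = float(np.max(np.abs(a - b)))
    return max_err <= atol, max_err

# Example: a negligible numerical difference still counts as a match.
ok, err = outputs_match([1.0, 2.0], [1.0, 2.00005])
```

In practice one would feed the same random input through the .pth, .onnx, and .pb models (transposing NCHW to NHWC where the TF side expects it) and run each pair of outputs through a check like this.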
Thanks for the support
Excellent. Thank you for sharing.
Issue Type
Others
OS
Linux
onnx2tf version number
1.17.5
onnx version number
1.14.1
onnxruntime version number
1.16.0
onnxsim (onnx_simplifier) version number
0.4.33
tensorflow version number
2.13.1
Download URL for ONNX
https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.onnx
Parameter Replacement JSON
Description