microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[Web] Onnxruntime web result compared to Python Onnx #22953

Open ueihgnurt opened 5 days ago

ueihgnurt commented 5 days ago

Describe the issue

ONNX Web inference: [Image]. Python + ONNX inference: [Image]. This is what happens if I remove the Conv2d layer from the model: [Image]

As you can see, the input is right, so the output is still right when there is no Conv2d layer. The problem appears when I put the Conv2d layer in. Everything uses float32. I wonder what is causing this issue.

To reproduce

The model: [Image]

Urgency

No response

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.20.1

Execution Provider

'wasm'/'cpu' (WebAssembly CPU)

skottmckay commented 5 days ago

Example code for how you're running the model in python and ORT web would be helpful.

ueihgnurt commented 5 days ago

> Example code for how you're running the model in python and ORT web would be helpful.

The dummy code is here: git@github.com:ueihgnurt/test_onnx_web.git

skottmckay commented 4 days ago

https://github.com/ueihgnurt/test_onnx_web/blob/main/python_examples/python_example.ipynb is using opencv, not onnxruntime, to run the model afaics, so there's no direct comparison to ORT web. I would suggest checking that the pre and post processing you're doing of the input/output is correct for the onnx model.

Additionally, compare the raw float32 values produced for a given set of input values to remove any pre/post processing differences from the equation.
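A minimal sketch of that raw-value comparison, assuming a Python onnxruntime session on one side and outputs exported from ORT web on the other (the model path and input name below are placeholders, not taken from the repo):

```python
import numpy as np

# Deterministic input: save it once and load the identical bytes in both the
# Python script and the browser, so preprocessing is out of the equation.
rng = np.random.RandomState(0)
x = rng.rand(1, 3, 224, 224).astype(np.float32)
np.save("fixed_input.npy", x)

def max_abs_diff(a, b):
    """Largest element-wise difference between two runtimes' raw outputs."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.max(np.abs(a - b)))

# Hypothetical usage with the Python runtime (model/input names assumed):
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx")
# py_out = sess.run(None, {"input": x})[0]
# print(max_abs_diff(py_out, web_out))  # web_out: raw floats dumped from ORT web
```

If the two runtimes agree to within a small tolerance (e.g. 1e-5) on the raw floats, the visual difference is coming from pre/post processing rather than the Conv.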

ueihgnurt commented 4 days ago

> https://github.com/ueihgnurt/test_onnx_web/blob/main/python_examples/python_example.ipynb is using opencv, not onnxruntime, to run the model afaics, so there's no direct comparison to ORT web. I would suggest checking that the pre and post processing you're doing of the input/output is correct for the onnx model.
>
> Additionally, compare the raw float32 values produced for a given set of input values to remove any pre/post processing differences from the equation.

So I tried using this code:

[Image]

The result is the same as the opencv one, and I didn't process anything at all. The error only occurs if I add Conv2d to the model. You can test with this model: https://github.com/ueihgnurt/test_onnx_web/blob/main/vueexamples/public/onnxdinov2/test_empty_model.onnx. It contains no Conv2d and just returns the original image, and with this ONNX model both the web version and the Python version return the same result.

ueihgnurt commented 4 days ago

I mean, all I did was divide the RGB values by 255.0 and then feed them into the model. How can the difference between the web version and the Python version be this big?
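For reference, the usual preprocessing for a float32 model exported from PyTorch looks like the sketch below; the NCHW layout and [0, 1] scaling are assumptions about this model, not confirmed from the repo:

```python
import numpy as np

def preprocess(img_hwc_uint8):
    """HWC uint8 image -> NCHW float32 in [0, 1].

    NCHW is what most PyTorch-exported ONNX models expect; if either runtime
    is fed HWC data instead, a Conv node will see scrambled channels.
    """
    x = img_hwc_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))                 # HWC -> CHW
    return x[np.newaxis, ...]                      # add batch dim -> NCHW

img = np.zeros((4, 4, 3), dtype=np.uint8)          # dummy 4x4 RGB image
print(preprocess(img).shape)  # (1, 3, 4, 4)
```

It is worth checking that the ORT web side builds its input tensor in exactly this layout, since JavaScript image APIs hand back interleaved HWC (RGBA) pixels.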

ueihgnurt commented 4 days ago


I don't think the result above is related much to float32 precision: you can clearly see the Conv2d literally grids the original image into 3x3 parts with the positions shuffled. That's clearly not what a Conv2d does, or at least not how I think it works.
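Purely as a hypothesis (not confirmed anywhere in this thread): output that looks like the original image chopped into tiles with shuffled positions is a classic symptom of one side interpreting an NCHW buffer as HWC, or vice versa. A small numpy sketch of the effect:

```python
import numpy as np

# A tiny "image" with distinct values so any reordering is visible.
h, w = 6, 6
img = np.arange(h * w * 3, dtype=np.float32).reshape(h, w, 3)  # HWC layout

chw = np.transpose(img, (2, 0, 1))  # correct CHW buffer for an NCHW model
wrong = chw.reshape(h, w, 3)        # same bytes reinterpreted as HWC

# The data is identical, but pixels land in completely different positions,
# producing the banded / tiled look rather than random noise.
print(np.array_equal(img, wrong))  # False
```

If this is what is happening, the difference would only surface once a Conv node is present, because an identity-style model returns the buffer untouched regardless of how its dimensions are labeled, which matches the behavior reported with test_empty_model.onnx.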