microsoft / onnxjs

ONNX.js: run ONNX models using JavaScript

Fully convolutional network - cannot reuse inference session with different input shape #301

Open · hamster3d opened this issue 3 years ago

hamster3d commented 3 years ago

I have a fully convolutional network with a variable input shape, namely (None, None, None, 3). After the first inference (with shape [1,116,32,3]), when I provide an input with a different shape I get a shape validation error: Uncaught (in promise) Error: input tensor[1] check failed: expected shape '[1,116,32,3]' but got [1,205,40,3]

This error doesn't appear if all subsequent requests use the same input shape.

The workaround for now is to reload the model.
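A minimal sketch of the reported behaviour and the workaround; the model path and tensor contents are placeholders, not from the original report:

```ts
import { InferenceSession, Tensor } from 'onnxjs';

async function reproduce() {
  let session = new InferenceSession({ backendHint: 'webgl' });
  await session.loadModel('./model.onnx');

  // First inference: the session fixes the graph shapes to [1, 116, 32, 3].
  const a = new Tensor(new Float32Array(1 * 116 * 32 * 3), 'float32', [1, 116, 32, 3]);
  await session.run([a]);

  // A different shape now fails with the shape validation error above.
  const b = new Tensor(new Float32Array(1 * 205 * 40 * 3), 'float32', [1, 205, 40, 3]);
  // await session.run([b]); // throws

  // Workaround: reload the model into a fresh session before using the new shape.
  session = new InferenceSession({ backendHint: 'webgl' });
  await session.loadModel('./model.onnx');
  await session.run([b]);
}

reproduce();
```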

fs-eire commented 2 years ago

Thanks for your feedback.

ONNX.js assumes that the graph is static within a single inference session, i.e. the shape of every value node in the graph will not change.

If you have only a few different input shapes, you can create a separate inference session for each of them (see the sketch below), or you can try ONNX Runtime Web, which implements a shader key to resolve this problem. However, if you have many different input shapes (say 100+), the WebGL backend may not be usable, because it compiles a different shader program for each input shape. Too many input shapes create too many WebGL programs, which will soon reach the maximum limit and eventually fail in the browser.
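A sketch of the "separate session per input shape" suggestion with ONNX.js; the model path is a placeholder, and this is only practical while the number of distinct shapes stays small:

```ts
import { InferenceSession, Tensor } from 'onnxjs';

// Cache one session per distinct input shape, keyed by the shape itself.
const sessions = new Map<string, InferenceSession>();

async function getSession(shape: number[]): Promise<InferenceSession> {
  const key = shape.join('x');
  let session = sessions.get(key);
  if (!session) {
    session = new InferenceSession({ backendHint: 'webgl' });
    await session.loadModel('./model.onnx');
    sessions.set(key, session);
  }
  return session;
}

async function infer(data: Float32Array, shape: number[]) {
  const session = await getSession(shape);
  return session.run([new Tensor(data, 'float32', shape)]);
}
```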


We are working on migrating ONNX.js to ONNX Runtime Web, which offers an improved user experience and better performance. Please visit ONNX Runtime Web for more details.
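For comparison, a hedged sketch of the same model running with different input shapes on a single ONNX Runtime Web session; the input name 'input' and the model path are assumptions:

```ts
import * as ort from 'onnxruntime-web';

async function run() {
  const session = await ort.InferenceSession.create('./model.onnx', {
    executionProviders: ['webgl'],
  });

  // The same session accepts different spatial sizes for a fully convolutional model.
  for (const dims of [[1, 116, 32, 3], [1, 205, 40, 3]]) {
    const size = dims.reduce((a, b) => a * b, 1);
    const feeds = { input: new ort.Tensor('float32', new Float32Array(size), dims) };
    await session.run(feeds);
  }
}

run();
```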