shreemoyee opened this issue 1 year ago
@shreemoyee, which operators are missing a float64 implementation for your ONNX model? Which execution provider are you using (CPU or CUDA)?
Hi! I am using CPU. I am just trying to run an inference session with double (float64) numpy inputs on a PyTorch model converted to ONNX format. Hope that helps.
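For concreteness, here is a minimal sketch of the setup being described (the network shape follows the description later in the thread; the file name is illustrative):

```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Illustrative model: 6 linear layers of 100 neurons each with SELU activations.
layers = []
for _ in range(6):
    layers += [nn.Linear(100, 100), nn.SELU()]
model = nn.Sequential(*layers).double()  # keep the weights in float64

# Export with a float64 dummy input so the graph is serialized with double tensors.
dummy = torch.randn(1, 100, dtype=torch.float64)
torch.onnx.export(model, dummy, "model_fp64.onnx")

# Running the exported graph with float64 numpy inputs on CPU is where the
# NOT_IMPLEMENTED error reported below appears.
sess = ort.InferenceSession("model_fp64.onnx", providers=["CPUExecutionProvider"])
x = np.random.randn(1, 100).astype(np.float64)
out = sess.run(None, {sess.get_inputs()[0].name: x})
```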
What types is your model expecting? You can use https://netron.app/ to visualize it.
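Alongside Netron, the declared input types can also be read programmatically with the onnx package; a short sketch (the file name is illustrative):

```python
import onnx
from onnx import TensorProto

model = onnx.load("model_fp64.onnx")
for inp in model.graph.input:
    elem_type = inp.type.tensor_type.elem_type
    # Prints e.g. 'DOUBLE' or 'FLOAT' next to each graph input name.
    print(inp.name, TensorProto.DataType.Name(elem_type))
```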
Hi @yuslepukhin, thanks for the advice -- I have a sequential network, 6 layers deep with 100 neurons each and SELU activation. I cast my PyTorch model to double (float64) and convert it to ONNX. Now when I run it with ORT, I get the error: NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Selu(6) node with name '/net/9/Selu'
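A common workaround when an operator has no float64 kernel is to export and run the model in float32, at the cost of exactly the precision this issue is asking for; a sketch under the same illustrative assumptions as above:

```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Same illustrative network as above, this time exported in float32,
# for which the CPU Selu kernel is implemented.
layers = []
for _ in range(6):
    layers += [nn.Linear(100, 100), nn.SELU()]
model_fp32 = nn.Sequential(*layers).float()

dummy = torch.randn(1, 100, dtype=torch.float32)
torch.onnx.export(model_fp32, dummy, "model_fp32.onnx")

sess = ort.InferenceSession("model_fp32.onnx", providers=["CPUExecutionProvider"])
x = np.random.randn(1, 100)          # numpy defaults to float64
out = sess.run(None, {sess.get_inputs()[0].name: x.astype(np.float32)})
```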
@shreemoyee, hi! I also encountered a similar problem. Have you resolved it? I get the error: NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Conv(11) node with name '/features/features.0/Conv'
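One way to answer the question at the top of the thread (which operators would need a float64 implementation) is to walk the graph and list the op types that touch double tensors; a rough sketch (the file name is illustrative and the type propagation rule is a simplification):

```python
import onnx
from onnx import TensorProto

model = onnx.load("model.onnx")

# Seed with every tensor that is declared as float64.
double_tensors = {init.name for init in model.graph.initializer
                  if init.data_type == TensorProto.DOUBLE}
double_tensors |= {i.name for i in model.graph.input
                   if i.type.tensor_type.elem_type == TensorProto.DOUBLE}

for node in model.graph.node:
    if any(name in double_tensors for name in node.input):
        print(node.op_type, node.name)
        # Simplifying assumption: outputs of a node fed doubles are double too.
        double_tensors.update(node.output)
```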
Describe the feature request
I am trying to use ONNX Runtime for inference where precision is very important. We train the model in PyTorch and save the weights in double, then export the model to ONNX. However, it looks like ONNX Runtime does not support double weights, since it throws: Unexpected input data type. Actual: (tensor(double)) , expected: (tensor(float)). Note that our PyTorch model is sequential, 6 layers deep, with SELU activation.
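The "Unexpected input data type" message reports the input type that the serialized graph declares, which can be checked directly from the session; a sketch (file name and input shape are illustrative):

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
meta = sess.get_inputs()[0]
print(meta.name, meta.type)  # e.g. 'tensor(float)' even though training used double

# The fed array's dtype must match the graph's declared type exactly:
x = np.random.randn(1, 100)          # float64 by default
out = sess.run(None, {meta.name: x.astype(np.float32)})
```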
Describe scenario use case
We are using ONNX Runtime to run inference on financial data, where extreme precision is needed to compute derivatives of smooth functions by numerical methods. Float64 would be ideal, not float32.
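To make the precision requirement concrete, a small sketch of why float32 breaks down for finite-difference derivatives of a smooth function:

```python
import numpy as np

def central_diff(f, x, h, dtype):
    # Central difference f'(x) ~ (f(x+h) - f(x-h)) / (2h), evaluated in dtype.
    x, h = dtype(x), dtype(h)
    return (f(x + h) - f(x - h)) / (dtype(2) * h)

h = 1e-4
exact = np.cos(1.0)  # derivative of sin at x = 1
for dtype in (np.float32, np.float64):
    approx = central_diff(np.sin, 1.0, h, dtype)
    print(dtype.__name__, abs(float(approx) - exact))
# With this step size the float32 error is around 1e-3..1e-4 (rounding noise
# dominates), while float64 stays near 1e-9 (the scheme's truncation error).
```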