shimaamorsy opened this issue 7 months ago
Hi there, WebGL will be deprecated in ORT Web soon. Please use WebGPU for GPU inference with ORT Web; in practice that means passing `executionProviders: ['webgpu']` instead of `['webgl']` when creating the session. Here are the docs: https://onnxruntime.ai/docs/tutorials/web/ep-webgpu.html and an example: https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/segment-anything
Thank you very much for your response.
I'm having another problem, and I'm struggling to find material to help me solve it. I'm trying to speed up YOLOv5 segmentation inference using static quantization, and I followed the official ONNX Runtime tutorial on applying static quantization.
However, I encountered an error when I tried to run the quantization preprocessing step on the model. If you know of any other material that could help me with this task, I would be very grateful.
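For reference, here is roughly the flow from the tutorial as I understand it: a shape-inference/graph-cleanup preprocessing pass, followed by `quantize_static` with a calibration data reader. This is only a sketch; the file names and the random calibration inputs are placeholders for my actual YOLOv5-seg export and real calibration images.

```python
# Sketch of the static quantization flow from the ONNX Runtime tutorial.
# File names and the calibration data below are placeholders.
import numpy as np
import onnxruntime
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static

class YoloCalibrationReader(CalibrationDataReader):
    """Feeds a handful of preprocessed images to the calibrator."""
    def __init__(self, model_path, num_samples=16):
        session = onnxruntime.InferenceSession(model_path, providers=["CPUExecutionProvider"])
        self.input_name = session.get_inputs()[0].name
        # Placeholder calibration data: replace with real, normalized images.
        self.samples = iter(
            [{self.input_name: np.random.rand(1, 3, 640, 640).astype(np.float32)}
             for _ in range(num_samples)]
        )

    def get_next(self):
        # Return None when calibration data is exhausted.
        return next(self.samples, None)

# Step 1 (the step that fails for me): shape inference and graph cleanup,
# run from the command line:
#   python -m onnxruntime.quantization.preprocess --input yolov5s-seg.onnx --output yolov5s-seg-infer.onnx
# Step 2: static quantization of the preprocessed model.
quantize_static(
    model_input="yolov5s-seg-infer.onnx",
    model_output="yolov5s-seg-int8.onnx",
    calibration_data_reader=YoloCalibrationReader("yolov5s-seg-infer.onnx"),
    weight_type=QuantType.QInt8,
)
```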
You can skip the preprocessing step to unblock yourself. As for the failure in shape inference, does your model have any non-standard ONNX ops?
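A quick way to check is to list any ops whose domain falls outside the standard ONNX domains; custom-domain ops are the usual culprits when shape inference fails during quantization preprocessing. A minimal sketch (the file name is a placeholder):

```python
import onnx

# List ops from non-standard domains; "yolov5s-seg.onnx" is a placeholder.
model = onnx.load("yolov5s-seg.onnx")
standard_domains = {"", "ai.onnx", "ai.onnx.ml"}
custom_ops = sorted({(node.domain, node.op_type)
                     for node in model.graph.node
                     if node.domain not in standard_domains})
print(custom_ops or "all ops are in standard ONNX domains")
```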
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Describe the issue
When I tried to run an InferenceSession with the WebGL execution provider, I encountered this error:
To reproduce
```js
// modelInputShape was not defined in the original snippet; [1, 3, 640, 640]
// is the default YOLOv8n input shape.
const modelInputShape = [1, 3, 640, 640];

const model = await ort.InferenceSession.create("yolov8n.onnx", {
  executionProviders: ["webgl"], // fails here; 'webgpu' is the recommended replacement
});
const tensor = new ort.Tensor(
  "float32",
  new Float32Array(modelInputShape.reduce((a, b) => a * b)),
  modelInputShape
);
await model.run({ images: tensor });
```