microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[JavaScript] InferenceSession on WebGL #20224

Open shimaamorsy opened 7 months ago

shimaamorsy commented 7 months ago

Describe the issue

When I tried to create an InferenceSession on WebGL, I encountered this error:

[screenshot: WebGL error]

To reproduce

  1. Download the YOLOv8n ONNX model here: MODEL
  2. Run this HTML page from a web server (e.g. Live Server in Visual Studio Code):

```html
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgl.min.js"></script>
<script type="module">
  // YOLOv8n default input shape: batch 1, 3 channels, 640x640
  const modelInputShape = [1, 3, 640, 640];

  const model = await ort.InferenceSession.create("yolov8n.onnx", {
    executionProviders: ["webgl"],
  });

  const tensor = new ort.Tensor(
    "float32",
    new Float32Array(modelInputShape.reduce((a, b) => a * b)),
    modelInputShape
  );
  await model.run({ images: tensor });
</script>
```



### Urgency

Yes, I need to solve this error as soon as possible.

### Platform

Windows

### OS Version

10 

### ONNX Runtime Installation

Built from Source

### ONNX Runtime Version or Commit ID

1.17.1

### ONNX Runtime API

Python

### Architecture

X64

### Execution Provider

Default CPU

### Execution Provider Library Version

WebGL

### Model File

_No response_

### Is this a quantized model?

No
EmmaNingMS commented 7 months ago

Hi there, WebGL will be deprecated in ORT Web soon. Please use WebGPU for GPU inference with ORT Web. Here are the docs: https://onnxruntime.ai/docs/tutorials/web/ep-webgpu.html and an example: https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/segment-anything
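Based on the linked docs, migrating the repro above from WebGL to WebGPU mainly means loading the WebGPU bundle and requesting the `webgpu` execution provider. A minimal sketch (the input shape is an assumption based on YOLOv8n's default 640×640 export; adjust to your model):

```html
<!-- Load the WebGPU-enabled ORT Web bundle instead of ort.webgl.min.js -->
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgpu.min.js"></script>
<script type="module">
  // Assumed YOLOv8n input shape: batch 1, 3 channels, 640x640
  const modelInputShape = [1, 3, 640, 640];

  // Request the 'webgpu' execution provider instead of 'webgl'
  const model = await ort.InferenceSession.create("yolov8n.onnx", {
    executionProviders: ["webgpu"],
  });

  const tensor = new ort.Tensor(
    "float32",
    new Float32Array(modelInputShape.reduce((a, b) => a * b)),
    modelInputShape
  );
  await model.run({ images: tensor });
</script>
```

Note that WebGPU requires a browser with WebGPU enabled (e.g. recent Chrome/Edge); on unsupported browsers session creation will fail.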

shimaamorsy commented 7 months ago

Thank you very much for your response.

I'm having another problem, and I'm struggling to find material to help me solve it. I'm trying to speed up YOLOv5 segmentation using static quantization, and I followed the official ONNX Runtime tutorial on applying static quantization.

However, I encountered an error when I ran the library's preprocessing step. If you know of other material that could help with this task, I would be very grateful.

yufenglee commented 7 months ago

> Thank you very much for your response.
>
> I'm having another problem, and I'm struggling to find material to help me solve it. I'm trying to speed up YOLOv5 segmentation using static quantization, and I followed the official ONNX Runtime tutorial on applying static quantization.
>
> However, I encountered an error when I ran the library's preprocessing step. If you know of other material that could help with this task, I would be very grateful.

You can skip the preprocessing step to unblock yourself. As for the shape-inference failure: does your model contain non-standard ONNX ops?

shimaamorsy commented 7 months ago

Thank you for replying.

No, it doesn't.

These are the models I am trying to quantize:

  - yolov5n-seg.onnx
  - yolov8n-seg.onnx

github-actions[bot] commented 6 months ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.