-
Can you provide a way to run inference with ONNX?
That way we could use the GPU with far fewer dependencies, and it would also be easier to adapt to other languages such as Rust.
Thanks!
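
For context, a minimal sketch of what ONNX Runtime GPU inference could look like once an export exists; the file name, input shape, and provider choice here are placeholders, not something confirmed by this repo:

```python
# Minimal ONNX Runtime inference sketch; "model.onnx" and the input shape
# are placeholders -- adjust to the actual exported graph.
import numpy as np
import onnxruntime as ort

# Prefer CUDA when available, fall back to CPU (needs the onnxruntime-gpu package).
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy = np.random.randn(1, 16000).astype(np.float32)  # e.g. 1 s of 16 kHz audio
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```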
-
Please write code that converts GigaAM-RNNT to ONNX and runs it. I have been trying to do this for a long time without any results (((
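
No GigaAM-specific export code is confirmed here; the sketch below shows only the generic `torch.onnx.export` recipe, with a stand-in module so it actually runs. RNNT models are usually split into separate encoder/decoder/joint graphs for ONNX, and the shapes and tensor names below are assumptions:

```python
# Generic torch.onnx.export recipe; DummyEncoder is a stand-in so the snippet
# runs -- replace it with the real GigaAM-RNNT encoder. Shapes/names are assumed.
import torch
import torch.nn as nn

class DummyEncoder(nn.Module):
    """Stand-in for the real encoder, for illustration only."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(64, 128, kernel_size=3, padding=1)

    def forward(self, audio_signal):
        return self.conv(audio_signal)

encoder = DummyEncoder().eval()
dummy_audio = torch.randn(1, 64, 300)  # (batch, features, time) -- assumed layout

torch.onnx.export(
    encoder,
    (dummy_audio,),
    "encoder.onnx",
    input_names=["audio_signal"],
    output_names=["encoded"],
    dynamic_axes={"audio_signal": {0: "batch", 2: "time"},
                  "encoded": {0: "batch", 2: "time"}},
    opset_version=17,
)
print("wrote encoder.onnx")
```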
-
Would it be possible to share the ONNX exports?
-
How do I convert to ONNX? I have been trying to convert on my end, but I keep getting this error:
![image](https://github.com/user-attachments/assets/388fcd99-793d-4971-81bf-e3013834233d)
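
The error in the screenshot is not readable as text here. If the export gets far enough to write a file, a common first diagnostic is to run the ONNX checker on it ("model.onnx" is a placeholder path):

```python
# If the export produced a file at all, onnx.checker often pinpoints
# what is malformed in the graph.
import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # raises with a descriptive message if invalid
print(onnx.helper.printable_graph(model.graph)[:500])  # peek at the graph
```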
-
Can we have an ONNX version?
-
### Search before asking
- [X] I have searched the X-AnyLabeling [model_zoo](https://github.com/CVHub520/X-AnyLabeling/blob/main/docs/en/model_zoo.md) and found no similar model requests.
### Descr…
-
The docs are a bit unclear: which version should I download to use with Torch, ONNX, and CUDA 12.4?
Do I need to download multiple versions?
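
A quick way to verify whichever combination you install actually sees the GPU; this assumes the usual package names (`torch` CUDA build and `onnxruntime-gpu`), not any repo-specific instructions:

```python
# Sanity-check which builds are installed and whether they see CUDA.
import torch
import onnxruntime as ort

print("torch:", torch.__version__, "| built for CUDA:", torch.version.cuda)
print("GPU visible to torch:", torch.cuda.is_available())
print("onnxruntime:", ort.__version__)
print("providers:", ort.get_available_providers())  # should include CUDAExecutionProvider
```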
-
Which one is faster? I want to run multiple YOLO11 models on my P40 GPU with the shortest inference time. How should I run the models?
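
Which backend is faster can really only be settled by timing on the actual card; below is a hedged timing sketch using the Ultralytics API, where the model and image names are just examples. One hardware note: the P40 is a Pascal card with weak FP16 throughput, so half-precision exports are unlikely to help there.

```python
# Rough timing sketch with the Ultralytics API; "yolo11n.pt" and "bus.jpg"
# are example names. The faster backend depends on export settings and GPU.
import time
from ultralytics import YOLO

pt_model = YOLO("yolo11n.pt")
pt_model.export(format="onnx")       # writes yolo11n.onnx next to the weights
onnx_model = YOLO("yolo11n.onnx")

for name, model in [("pytorch", pt_model), ("onnx", onnx_model)]:
    model("bus.jpg")                 # warm-up run
    t0 = time.perf_counter()
    for _ in range(20):
        model("bus.jpg")
    print(f"{name}: {(time.perf_counter() - t0) / 20:.4f} s/img")
```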
-
I’m looking to experiment with running TransPose on mobile devices, but converting the model to ONNX fails. What could be the reason? Can anyone help?
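
Hard to diagnose without the actual error text, but a frequent cause with transformer-based models is an opset that is too low for attention ops. Below is a hedged export-and-verify sketch; TransPose itself is not loaded here, a small transformer block stands in so the snippet runs:

```python
# Generic export-and-verify loop; swap the stand-in block for the real model.
# Transformer ops often require a recent opset (14+), a common failure cause.
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

model = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True).eval()
dummy = torch.randn(1, 16, 64)

torch.onnx.export(
    model, (dummy,), "transpose_block.onnx",
    input_names=["x"], output_names=["y"], opset_version=17,
)

# Check the exported graph numerically against PyTorch before targeting mobile.
sess = ort.InferenceSession("transpose_block.onnx",
                            providers=["CPUExecutionProvider"])
(onnx_out,) = sess.run(None, {"x": dummy.numpy()})
torch_out = model(dummy).detach().numpy()
print("max abs diff:", np.abs(onnx_out - torch_out).max())
```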
-
Specifically, we want to be able to run the following two models (a rough export sketch follows the list):
- [ ] llama3
- [ ] sdxl
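
For reference, a hedged sketch of one possible route via Hugging Face Optimum's ONNX Runtime integration; the checkpoint IDs are the public Hugging Face ones, and whether the resulting graphs fit this project's size and opset constraints is untested here:

```python
# Hedged sketch: Optimum can export both checkpoints to ONNX on the fly.
# Note the llama3 checkpoint is gated and needs an accepted license / auth token.
from optimum.onnxruntime import ORTModelForCausalLM, ORTStableDiffusionXLPipeline

llama = ORTModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", export=True)
llama.save_pretrained("llama3-onnx/")

sdxl = ORTStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", export=True
)
sdxl.save_pretrained("sdxl-onnx/")
```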