-
I noticed that your code loads the ONNX model and runs on the CPU by default.
You can add the following code to load the model and run inference on the GPU, which will be much faster than the CPU.
Use GPU:
…
-
When the model uses bfloat16 ops, the optimizer fails with the following. We should handle custom types from ONNX in `_constant_folding`:
```pytb
Traceback (most recent call last):
  File "/works…
```
-
When I tested it by following the prepare_pose/README.md document, the following error occurred while running DWPose:
```pytb
Traceback (most recent call last):
  File "inference_video.py", li…
```
-
See https://github.com/opencv/opencv/pull/26056
The layers are the following:
- [x] Concat
- [x] ConstantOfShape
- [x] [WIP] Einsum (done by @Abdurrahheem)
- [x] Expand
- [x] Gather
- [ ] G…
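For reference, the semantics these ONNX layers must reproduce can be sketched in numpy (illustrative only — this is not the OpenCV implementation):

```python
import numpy as np

# ONNX ConstantOfShape: fill a tensor of the given shape with a constant value
shape = np.array([2, 3])
const_of_shape = np.full(tuple(shape), 0.0, dtype=np.float32)

# ONNX Expand: broadcast an input to a target shape (numpy broadcasting rules)
x = np.array([[1.0], [2.0]])           # shape (2, 1)
expanded = np.broadcast_to(x, (2, 3))  # shape (2, 3)

# ONNX Gather along axis 0: pick rows by index
data = np.arange(6, dtype=np.float32).reshape(3, 2)
gathered = np.take(data, np.array([2, 0]), axis=0)
```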
-
I trained an inpainting model which has `torch.rfftn` / `torch.irfftn` modules and accepts image data with shape [b, 4, h, w]. For some reason `torch.onnx.export` can't export operators with compl…
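ONNX has no complex tensor type, so one common workaround (a sketch of the idea, not the exporter's own mechanism) is to rewrite the FFT as real-valued matrix multiplications, which the exporter can trace. For a length-N signal, the rFFT becomes two matmuls against fixed cosine/sine projection matrices:

```python
import numpy as np

# rFFT expressed as real-valued matmuls: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)
# splits into a cosine projection (real part) and a sine projection (imag part).
N = 8
n = np.arange(N)
k = np.arange(N // 2 + 1)[:, None]
cos_basis = np.cos(2 * np.pi * k * n / N)
sin_basis = -np.sin(2 * np.pi * k * n / N)

x = np.random.randn(N)
re = cos_basis @ x  # real part of rfft(x)
im = sin_basis @ x  # imaginary part of rfft(x)

ref = np.fft.rfft(x)
assert np.allclose(re, ref.real) and np.allclose(im, ref.imag)
```

The same decomposition applies per-axis for `rfftn`, and transposing the bases gives the inverse transform; precomputing the basis matrices as buffers keeps the traced graph free of complex ops.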
-
I have tried several combinations, but while I can train a LoRA on this branch, I can't create captions.
I need to activate a separate 'master' branch process for tagging.
I get the below when tryin…
-
I'm able to build my Rust app on my Mac with an Android target without sherpa-rs included, but once it is included, it fails at `Compiling sherpa-rs-sys v0.1.9`.
I've tried regular build with …
-
Hi, can you provide a script to convert your models to ONNX format? Thanks a lot!
-
### 🚀 The feature, motivation and pitch
It is sometimes useful for basic perf testing to be able to execute a third-party given ONNX file in different backends. Currently there exist several such e…
-
### ❓ Question
Hi,
I am looking into the use of ONNX with SB3. I have tested 2 models (A2C and PPO) on a custom environment using a MultiInputActorCriticPolicy. The observation space of the envir…