-
### Bug
Training works fine, but running inference with `inference.py` or converting to ONNX raises the following error:
```
Traceback (most recent call last):
  File "/home/code/Relation-DETR/inference.py", line 165, in <module>
    inference()
  File "/home/user/code/Relati…
-
I am using ONNX version 1.14.0 and TensorRT version 7.1.3.0.
I am having trouble converting the ONNX model generated with the notebook example in the repo to a TensorRT model. Below is the er…
-
Can this model be adapted for real-time streaming, as was done with the Zipformer model?
I saw some implementations of streaming Whisper: https://github.com/ufal/whisper_str…
-
Hi,
Could you write a Python script to convert those .pth models to ONNX?
Thanks!
-Scott
-
The description at https://huggingface.co/TensorStack/Flux_schnell-f16-onnx
says to run: var pipeline = FluxPipeline.CreatePipeline("D:\\Models\\Flux_schnell-f32-onnx");
However FluxP…
-
### Voice Changer Version
Tried both MMVCServerSIO_win_onnxdirectML-cuda_v.1.5.3.18a and MMVCServerSIO_win_onnxgpu-cuda_v.1.5.3.18a
### Operational System
Windows 10
### GPU
XFX RX 6800…
-
Thank you for the awesome work!
Can you tell me which datasets the ONNX models were trained on?
-
```
[optimizer.cpp::computeCosts::1981] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[Reshape_419 + Transpose_420...Gather_2423]}
```
I have an ONNX model…
-
When converting ONNX to TensorRT, how can the output shape be determined by the input values rather than the input dimensions?
-
When I try to use `compile_model` with CUDA as the specified device, I encounter the following error. Is there a way to resolve this, or is the `lora.py` code not yet compatible with running on a GPU?…