-
Here is my nvidia-smi result.
`python -c "import torch; import tensorrt; import tensorrt_llm"` runs without errors.
When a client connects, the server gets core dumped with an error related to libcudnn_cnn_infe…
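For what it's worth, a slightly fuller version of that import check (a sketch; it only prints the versions and the cuDNN build that torch links against, using standard PyTorch/TensorRT APIs) looks like:
```python
# Minimal environment check (sketch): print the versions torch, TensorRT and
# TensorRT-LLM report, plus the cuDNN build that torch is linked against.
import torch
import tensorrt
import tensorrt_llm

print("torch:", torch.__version__, "CUDA available:", torch.cuda.is_available())
print("torch CUDA:", torch.version.cuda, "cuDNN:", torch.backends.cudnn.version())
print("tensorrt:", tensorrt.__version__)
print("tensorrt_llm:", tensorrt_llm.__version__)
```
If the cuDNN build printed here differs from the libcudnn the server actually loads at runtime, that mismatch is a likely cause of the crash.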
-
I followed the instructions from the README and the Docker image built fine.
However, when I run it, the WhisperFusion process fails (which prevents the webapp from working).
The problem is unfortunate…
-
I run the Docker image `ghcr.io/collabora/whisperbot-base:latest` and start the server, but when I send a request from the client, I see the following.
Client:
```
[INFO]: * recording
setting
[INFO]: Waiting for ser…
-
**Which OS are you using?**
- OS: Google Colab
2024-06-07 09:48:55.286611: E external/local_xla/xla/stream_executor/cuda/cuda_dnn…
-
# ComfyUI Error Report
## Error Details
- **Node Type:** UpscalerTensorrt
- **Exception Type:** polygraphy.exception.exception.PolygraphyException
- **Exception Message:** Could not deserialize …
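A `Could not deserialize` error from Polygraphy usually means the cached TensorRT engine does not match the installed TensorRT build or the GPU it was built on. A standalone check that tries to load the engine outside ComfyUI (a sketch; `upscaler.engine` is a placeholder path for the cached engine file) would look like:
```python
# Sketch: try to deserialize a cached TensorRT engine outside ComfyUI.
# "upscaler.engine" is a placeholder path; engines are only valid for the
# TensorRT version and GPU they were built with.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
print("TensorRT:", trt.__version__)

with open("upscaler.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

print("deserialized OK:", engine is not None)
```
If this also fails, rebuilding the engine with the currently installed TensorRT version is the usual fix.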
-
I am trying to use Whisper base.en with TensorRT. I followed the steps and built the weights in Docker, and it all worked. But when I try to connect using:
```
from whisper_live.client import Tran…
-
Hello, I went through the whole setup guide: setting up Docker, cloning the repo, and executing the install script.
(I'm using the Nvidia Jetson AGX Orin.)
I want to adjust the Dockerfile in jetson-contain…
-
I've looked through the documentation and other issues but have yet to find a clear solution.
Using a very simple test:
`from whisper_live.client import TranscriptionClient
client = Transcrip…
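For context, a complete version of that test (a sketch assuming the server listens on localhost:9090; the keyword arguments mirror the WhisperLive README example and may differ between versions) would be:
```python
# Minimal sketch of a full client call; host, port and keyword arguments are
# assumptions based on the WhisperLive README defaults.
from whisper_live.client import TranscriptionClient

client = TranscriptionClient(
    "localhost",
    9090,
    lang="en",
    translate=False,
    model="small",
)
client()  # streams microphone audio to the server and prints transcriptions
```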
-
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_processes` was set to a value of `1`
`--num_machines` was set to a value of `1`
`--mixed_precisi…
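These lines are informational: `accelerate launch` fell back to its defaults because the flags were not passed explicitly. A sketch of an invocation that sets them explicitly (the script name `train.py` is a placeholder) would be:
```
accelerate launch --num_processes 1 --num_machines 1 --mixed_precision no train.py
```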