-
I used the image nvcr.io/nvidia/tritonserver:23.09-py3-min to compile and install Triton. The com…
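As a minimal sketch of the workflow above (assuming Docker is installed; the container flags are illustrative, not Triton's documented build procedure), the commands to fetch that base image and open a shell in it for building could be assembled like this:

```python
# Sketch: assemble the docker commands one would run to pull the Triton
# 23.09 "min" base image and start an interactive container to build in.
# Assumes Docker is installed; the run flags are illustrative.

IMAGE = "nvcr.io/nvidia/tritonserver:23.09-py3-min"

def docker_pull_cmd(image: str = IMAGE) -> list[str]:
    """Command to fetch the base image from NGC."""
    return ["docker", "pull", image]

def docker_build_shell_cmd(image: str = IMAGE) -> list[str]:
    """Command to start an interactive container for compiling inside."""
    return ["docker", "run", "--rm", "-it", "--gpus", "all", image, "bash"]

if __name__ == "__main__":
    print(" ".join(docker_pull_cmd()))
    print(" ".join(docker_build_shell_cmd()))
```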
-
Hi,
Thanks for open sourcing the VSCode extension and the model files. Is there any way you can release the inferencing server code as well? I'd like to host the model myself but VSCode extension s…
-
Dear authors,
Thank you very much for releasing the code of the great work! We are now trying to deploy your model for the mobile robot navigation task. We are using [inference_pretrained.ipynb](ht…
-
I wonder how to download det_model.onnx and rec_model.onnx. Thanks a lot in advance!
-
**Description**
A clear and concise description of what the bug is.
Before calling unload model, memory usage is as below:
After calling unload model, memory usage is as below:
**Triton Information**
What vers…
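For context, the unload call in question goes through Triton's model-repository HTTP API. A minimal sketch of building that request with only the standard library (the host and model name are placeholders; the endpoint paths follow Triton's repository extension):

```python
# Sketch: build a load/unload request for Triton's model-repository
# HTTP API (POST /v2/repository/models/<model>/load or /unload).
# Host and model name below are placeholders.
import urllib.request

def repository_request(base_url: str, model: str, action: str) -> urllib.request.Request:
    """Build a POST request for /v2/repository/models/<model>/<action>."""
    if action not in ("load", "unload"):
        raise ValueError("action must be 'load' or 'unload'")
    url = f"{base_url}/v2/repository/models/{model}/{action}"
    return urllib.request.Request(
        url, data=b"{}", method="POST",
        headers={"Content-Type": "application/json"},
    )

# Example (not executed here):
# urllib.request.urlopen(repository_request("http://localhost:8000",
#                                           "my_model", "unload"))
```

Comparing process memory before and after sending the unload request is how the behaviour above would be observed.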
-
### System Info
Apple M2, Sonoma 14.6 (23G80), Python 3.12.5, pandasai 2.2.14
### 🐛 Describe the bug
The getting started example (https://docs.pandas-ai.com/library#smartdataframe) produces a wrong…
-
### System Info
TGI from Docker
text-generation-inference:2.2.0
host: Ubuntu 22.04
NVIDIA T4 (x1)
nvidia-driver-545
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An o…
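For reproduction, the request body sent to the `text-generation-inference:2.2.0` container's `/generate` endpoint can be sketched as follows (the prompt and parameters are placeholders):

```python
# Sketch: JSON body for POST /generate on a running TGI instance.
# Prompt and generation parameters are placeholders.
import json

def generate_payload(prompt: str, max_new_tokens: int = 64) -> bytes:
    """Single-prompt request body for TGI's /generate endpoint."""
    return json.dumps({
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }).encode("utf-8")

# Example (not executed here): POST this to http://localhost:8080/generate
# with Content-Type: application/json.
```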
-
I'd really love to use a library like this to facilitate the transmission of JSON Schemas, or a similar isomorphic or homomorphic schema description, for describing JS objects and other structured data…
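To make the idea concrete, here is a minimal stdlib-only sketch of transmitting a JSON Schema as plain JSON and checking an object against it on the receiving side. The toy validator below covers only `type`, `required`, and `properties`; it is an illustration, not a full JSON Schema implementation, and the schema itself is a made-up example.

```python
import json

# A JSON Schema describing a JS-style object. Serialized, it can be
# sent over the wire and reconstructed on the other side.
PERSON_SCHEMA = {
    "type": "object",
    "required": ["name", "age"],
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "number"},
    },
}

def check(obj, schema) -> bool:
    """Toy validator: checks 'required' keys and primitive 'type's only."""
    types = {"string": str, "number": (int, float), "object": dict}
    if not isinstance(obj, types[schema["type"]]):
        return False
    if any(key not in obj for key in schema.get("required", [])):
        return False
    for key, sub in schema.get("properties", {}).items():
        if key in obj and not isinstance(obj[key], types[sub["type"]]):
            return False
    return True

wire = json.dumps(PERSON_SCHEMA)   # transmit as plain JSON
received = json.loads(wire)        # reconstruct on the receiver
```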
-
A Triton inference server might be useful for the open-source models
https://github.com/triton-inference-server
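As a sketch of what serving an open-source model through Triton would look like from the client side, here is a stdlib-only helper that builds a KServe-v2 inference payload for Triton's `POST /v2/models/<model>/infer` endpoint. The tensor name, shape, and datatype are placeholders for whatever the deployed model actually expects.

```python
# Sketch: single-input KServe v2 inference request body, as accepted by
# Triton at POST /v2/models/<model>/infer. Tensor name, shape, and
# datatype are placeholders.
import json

def infer_payload(tensor_name: str, data: list, datatype: str = "FP32") -> bytes:
    """Build a one-input inference request body for Triton's HTTP API."""
    return json.dumps({
        "inputs": [{
            "name": tensor_name,
            "shape": [1, len(data)],
            "datatype": datatype,
            "data": data,
        }]
    }).encode("utf-8")
```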
-
Hi,
I'm thinking about using the MMdeploy SDK as a backend in the [Triton server](https://github.com/triton-inference-server). It seems that many people would be interested in this usage. Do you h…