-
### 🐛 Describe the bug
I am using TorchServe for model inference, deployed in a Docker container orchestrated by Kubernetes. The models are stored externally in an S3 bucket and are loaded into the model…
-
### 🐛 Describe the bug
About 30-40 seconds after running `torchserve --start ...`, it prints error messages and stops my model.
No client requests have been made yet.
### Error logs
###…
-
Kubeflow uses the `pytorch/torchserve-kfs` image as the KServe runtime image for serving PyTorch models:
https://github.com/kserve/kserve/blob/f7de5e696e8d0e64e3ed2b2493ec64244291a5c9/install/v0.11.…
-
With PyTorch 2.0, the `torch.compile` feature has enhanced PyTorch's performance capabilities. TorchServe, the ML model serving framework developed by the PyTorch team, offers a flexible architecture for servi…
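As a hedged sketch of what the passage above describes (the exact key names vary across TorchServe releases, so check the docs for your version), a model archive's `model-config.yaml` can request `torch.compile` roughly like this:

```yaml
# model-config.yaml (sketch; pt2 key layout differs between TorchServe versions)
minWorkers: 1
maxWorkers: 1
pt2:
  compile:
    enable: true
    backend: inductor   # TorchDynamo backend used for compilation
```

TorchServe then compiles the model inside the worker when the handler initializes, so the first few inference requests absorb the compilation latency.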
-
### Your current environment
I am using `torchserve` to spin up the vLLM instance (https://github.com/pytorch/serve?tab=readme-ov-file#-quick-start-llm-deployment-with-docker).
### Model Input D…
-
I'm getting this error on startup:
```
Traceback (most recent call last):
File "/home/kavan/.local/bin/torchserve-dashboard", line 5, in
from torchserve_dashboard.cli import main
…
-
**Please fill in this feature request template to ensure a timely and thorough response.**
## Willingness to contribute
The MLflow Community encourages new feature contributions. Would you or anot…
-
Environment:
x86 architecture, Ubuntu 22.04.4
200 GB RAM, 20-core CPU
6x RTX 3090 GPUs
Running `docker-compose -f docker-compose.gpu.yaml up -d` fails with an error:
(base) root@ps:/chatTTS/ChatTTS-ui# docker-compose -f docker-compose.gpu.yaml up -d
WARNI…
-
### Bug Description
I am testing the KServe batcher. However, I encountered an issue where the agent fails with the error: "error: unknown flag enable-batcher."
Can anyone help me understan…
-
### 🐛 Describe the bug
2024-06-22T03:41:52,860 [ERROR] W-9000-bloom7b1_1.0 org.pytorch.serve.wlm.WorkerThread - Number or consecutive unsuccessful inference 2
2024-06-22T03:41:52,861 [ERROR] W-9000-…