-
People have asked how to serve Chemprop models via a web API (see #591). Part of this is how to save the featurizer/preprocessing alongside the model for delivery to others. It looks l…
-
Whenever we use external models for feature extraction that are based on torch, it's presumably much more efficient to serve them through [TorchServe](https://pytorch.org/serve/).
-
Hello, I see TorchServe engine support mentioned in the Readme but cannot find any way to actually use it. Is it available?
-
(animated_drawings) root@iZmj7apgrkrc74erbt5rt4Z:~/mywork/AnimatedDrawings/torchserve# docker run -d --name docker_torchserve -p 8080:8080 -p 8081:8081 docker_torchserve
a49cdc409dccf777dda22ef4d316f…
-
=> ERROR [chat-tts-ui internal] load metadata for docker.io/pytorch/torchserve:0.11.0-gpu 35.6s
------
> [chat-tts-ui internal] load metadata for docker.io/pytorch/torc…
-
As far as I understand, it is not possible to easily port this model to TorchServe.
It is not possible to export it to TorchScript because of some parts of the code, and it is not possible to export it with a …
-
I am working on deploying the SAM model using TorchServe. My current implementation performs both the image embedding computation and mask prediction in a single request-response cycle, which is not g…
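One common way to split the two phases is to cache the expensive image embedding keyed by a hash of the image content, so that later mask-prediction requests can reuse it. Below is a minimal sketch of such a caching layer, independent of both TorchServe and SAM; the names `EmbeddingCache` and `compute_fn` are illustrative, not part of either API:

```python
import hashlib


class EmbeddingCache:
    """Cache expensive per-image embeddings keyed by a content hash.

    In a real TorchServe handler, compute_fn would be the SAM image
    encoder and the cached value its embedding tensor; here it is
    whatever compute_fn returns.
    """

    def __init__(self, compute_fn):
        self._compute = compute_fn  # the expensive encoder (assumed)
        self._store = {}

    @staticmethod
    def key_for(image_bytes: bytes) -> str:
        # Content hash, so identical uploads map to the same entry.
        return hashlib.sha256(image_bytes).hexdigest()

    def get_or_compute(self, image_bytes: bytes):
        key = self.key_for(image_bytes)
        if key not in self._store:
            self._store[key] = self._compute(image_bytes)  # expensive path
        return self._store[key]
```

With this, the first request for an image pays the embedding cost, and subsequent mask-prediction requests for the same image only run the (cheap) mask head.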
-
# Collection of TorchServe issues
- A collection of all technical issues encountered while serving with torchserve.
- Written because a great many errors came up while working through the tutorial.
- Briefly record each error and its resolution.
-
## 🚀 The feature
Implement support for Detectron2 models within the TorchServe object detection examples. This includes:
1. Developing a custom handler that works seamlessly with both CPU and GP…
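A TorchServe custom handler follows a four-method contract: `initialize`, `preprocess`, `inference`, and `postprocess`, invoked in that order from `handle()`. The sketch below shows that flow standalone so the call order is clear; under TorchServe the class would subclass `ts.torch_handler.base_handler.BaseHandler`, and the Detectron2-specific model calls (omitted here) are assumptions:

```python
class Detectron2Handler:
    """Sketch of the TorchServe handler contract for a Detectron2 model.

    Under TorchServe this would subclass
    ts.torch_handler.base_handler.BaseHandler; the real predictor
    construction and inference calls are stubbed out.
    """

    def initialize(self, context):
        # Real handler: read the model dir from context, build the
        # Detectron2 predictor, and select cuda/cpu by availability.
        self.initialized = True

    def preprocess(self, data):
        # Decode request payloads into model inputs (e.g. image bytes).
        return [row.get("body") or row.get("data") for row in data]

    def inference(self, inputs):
        # Stub for the Detectron2 predictor call.
        return [{"input": x} for x in inputs]

    def postprocess(self, outputs):
        # Must return one JSON-serializable item per request in the batch.
        return outputs

    def handle(self, data, context):
        if not getattr(self, "initialized", False):
            self.initialize(context)
        return self.postprocess(self.inference(self.preprocess(data)))
```

Because device selection happens once in `initialize`, the same handler can serve on CPU or GPU without code changes in the per-request path.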
-
Hello,
I was trying to set up a local run on my Mac and got an error when running `torchserve --start --ts-config config.local.properties --foreground`: a "no module named 'nvgpu'" error.
…
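The usual cause is that TorchServe's GPU metrics path imports the `nvgpu` package, which is typically absent on a GPU-less Mac; the common remedies are installing the package or telling TorchServe not to use GPUs. A small sketch for checking which situation applies (the `number_of_gpu` key mentioned in the comment is the one TorchServe's `config.properties` uses; treat it as an assumption to verify against your TorchServe version):

```python
import importlib.util


def nvgpu_available() -> bool:
    """True if the nvgpu package (used by TorchServe's GPU metrics) is importable."""
    return importlib.util.find_spec("nvgpu") is not None


def suggested_fix() -> str:
    # Either install the missing package, or tell TorchServe not to
    # touch GPUs (e.g. number_of_gpu=0 in config.local.properties).
    if nvgpu_available():
        return "nvgpu present; the error likely lies elsewhere"
    return "pip install nvgpu  # or set number_of_gpu=0 in the ts-config"
```

Running this in the same environment as `torchserve` tells you whether the import is the actual failure point before you change any config.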