-
I am trying to get the system to recognize a custom embedding endpoint so it can use a special embedding model. The system serving it is OpenAI-API-compliant, exposing the /v1/embeddings path. My full embedding…
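For context, an OpenAI-compliant embedding endpoint accepts a POST to /v1/embeddings and returns vectors under `data[i].embedding`. A minimal client sketch in Python; the URL, key, and model name are placeholders, not values from this report:
```
import requests

# Hypothetical endpoint, key, and model name, for illustration only.
resp = requests.post(
    "http://localhost:8000/v1/embeddings",
    headers={"Authorization": "Bearer dummy-key"},
    json={"model": "my-custom-model", "input": ["some text to embed"]},
)
resp.raise_for_status()
# OpenAI-style responses put each vector under data[i].embedding.
vector = resp.json()["data"][0]["embedding"]
print(len(vector))
```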
-
OS version: CentOS 7
tfserving version: [tensorflow/serving:2.10.0-gpu](https://hub.docker.com/layers/tensorflow/serving/2.10.0-gpu/images/sha256-183724e62d47acc5b9fa93ddbcb7eeedbfb0ead28cbe2a0a6e5fa2…
-
## Description
In addition to having typesense call the OpenAI or Google Cloud ML APIs, or using the built-in ONNX runtime, it would be _wonderful_ to allow typesense to call custom model serving APIs.…
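To make the request concrete, here is a sketch of the kind of self-hosted, OpenAI-compatible endpoint such a feature could target; a minimal Flask server where the route shape follows the OpenAI spec but the stub model and vector size are assumptions for illustration:
```
# Minimal OpenAI-compatible /v1/embeddings server sketch (Flask).
# The "model" here is a stub returning zero vectors; swap in a real model.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.post("/v1/embeddings")
def embeddings():
    body = request.get_json()
    texts = body["input"]
    if isinstance(texts, str):
        texts = [texts]
    data = [
        {"object": "embedding", "index": i, "embedding": [0.0] * 768}
        for i, _ in enumerate(texts)
    ]
    return jsonify({"object": "list", "data": data, "model": body.get("model", "custom")})

if __name__ == "__main__":
    app.run(port=8000)
```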
-
Issue: inference.py dependencies aren't installed in the SageMaker TensorFlow Serving container.
Resulting error: _**ModuleNotFoundError: No module named 'nltk'**_
**Versioning details**
Sagemak…
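If it helps: the SageMaker TensorFlow Serving container pip-installs a `code/requirements.txt` bundled in the model archive at startup, so listing `nltk` there is the usual way to get inference.py dependencies installed. A sketch of building such an archive; the version directory and file paths are illustrative:
```
import tarfile

# Expected model.tar.gz layout for the SageMaker TensorFlow Serving container:
#   1/saved_model.pb          (SavedModel under a numeric version dir)
#   code/inference.py         (pre/post-processing handlers)
#   code/requirements.txt     (e.g. a line reading "nltk")
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("1", arcname="1")
    tar.add("code/inference.py", arcname="code/inference.py")
    tar.add("code/requirements.txt", arcname="code/requirements.txt")
```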
-
Hi @nyadla-sys ,
I am using your [generate_tflite_from_whisper.ipynb](https://github.com/nyadla-sys/whisper.tflite/blob/main/models/generate_tflite_from_whisper.ipynb) to generate whisper models in…
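For anyone reproducing this, a generated TFLite model can be inspected and smoke-tested with `tf.lite.Interpreter`; a sketch where the file name and dummy input are assumptions, and a real input would be a mel spectrogram matching the export:
```
import numpy as np
import tensorflow as tf

# Load the converted model and inspect its declared I/O shapes.
interpreter = tf.lite.Interpreter(model_path="whisper.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print(inp["shape"], out["shape"])

# Smoke test with a zero tensor of the right shape/dtype.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
tokens = interpreter.get_tensor(out["index"])
```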
-
### Goal
GraphQL API serving endpoints based on the `document-model-libs` schema
### Context
The Switchboard API (this repository) has to eventually behave based on the business logic developed in t…
-
**Function description:** Automatically select an AI serving backend (TF Serving, Paddle Serving, MindSpore Serving, TensorFlow Lite, Paddle Lite, MNN, OpenVINO, etc.) according to the hardware platform, and automa…
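As a rough illustration of the described selection logic (not the project's actual implementation; the backend names and detection heuristics below are assumptions):
```
import platform
import shutil

def select_backend() -> str:
    """Pick a serving backend from coarse hardware/OS signals."""
    machine = platform.machine().lower()
    if shutil.which("nvidia-smi"):               # NVIDIA GPU visible
        return "tensorflow-serving-gpu"
    if machine in ("arm", "arm64", "aarch64"):   # mobile/embedded target
        return "tflite"                          # or Paddle Lite / MNN
    return "openvino"                            # Intel CPU fallback

print(select_backend())
```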
-
First, thanks for the author's work. I export the saved_model with export.py, and I want to deploy this model with TensorFlow Serving and use the REST API to make predictions. The deployment works w…
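For reference, TensorFlow Serving's REST predict call has the documented form below; the host, port 8501 (the REST default), model name, and input tensor are placeholders, since the real signature depends on the exported SavedModel:
```
import requests

# POST /v1/models/<name>:predict with {"instances": [...]} is the
# standard TF Serving REST API; "my_model" and the input are placeholders.
url = "http://localhost:8501/v1/models/my_model:predict"
payload = {"instances": [[0.0, 0.0, 0.0]]}  # replace with a real input tensor
resp = requests.post(url, json=payload)
resp.raise_for_status()
print(resp.json()["predictions"])
```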
-
Create Custom Serving Image with BuildPacks. Then launch the docker image.
```
docker run -e PORT=8080 -p 8080:8080 ${DOCKER_USER}/custom-model:v1
```
It fails with "python -m model: command not found…
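That error suggests the buildpack's start command resolves to `python -m model`, so the project needs a runnable `model.py` at its root (and a file like requirements.txt so the Python buildpack is detected at all). A minimal sketch of such a module, assuming the KServe Python SDK; treat the class name and prediction logic as placeholders:
```
# model.py — minimal sketch assuming the KServe Python SDK.
from kserve import Model, ModelServer

class CustomModel(Model):
    def __init__(self, name: str):
        super().__init__(name)
        self.ready = False
        self.load()

    def load(self):
        # Load real weights/artifacts here.
        self.ready = True

    def predict(self, payload: dict, headers=None) -> dict:
        instances = payload["instances"]
        return {"predictions": [len(x) for x in instances]}  # dummy output

if __name__ == "__main__":
    ModelServer().start([CustomModel("custom-model")])
```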
-
Hello everyone,
I'm currently facing an issue and would greatly appreciate any assistance you can offer.
I have a PaddleOCR model that I'm serving through a Docker image based on version 2.5.1 o…
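Since the report is cut off, for context only: a typical client call against a PaddleOCR pipeline server follows Paddle Serving's HTTP example convention, sketched below; the port 9998 and the /ocr/prediction route come from the stock examples and may differ in this image:
```
import base64
import requests

# Hedged sketch of a Paddle Serving pipeline request; port and route
# follow the stock OCR example defaults and may differ in this image.
with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf8")

resp = requests.post(
    "http://localhost:9998/ocr/prediction",
    json={"key": ["image"], "value": [image_b64]},
)
print(resp.json())
```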