-
If for some reason the LLM service fails to respond, jobs might get stuck forever.
See logs:
```bash
linto_llm-gateway.1.1qeqpejy2j4n@linagora-linto-bm-02 | 25/11/2024 15:46:13 http_server I…
```
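One way to avoid jobs hanging indefinitely is to bound every call to the LLM service with an explicit timeout and fail (or retry) the job when it expires. Below is a minimal, illustrative sketch only; the gateway URL, payload shape, timeout, and retry values are hypothetical and not taken from the actual service.

```python
import time
import requests

LLM_URL = "http://llm-gateway:8000/v1/chat/completions"  # hypothetical endpoint
REQUEST_TIMEOUT = 30   # seconds to wait for connect + response
MAX_ATTEMPTS = 3       # give up (and fail the job) after this many tries


def call_llm(payload: dict) -> dict:
    """Call the LLM service with a hard timeout so a silent failure
    cannot leave the job stuck forever."""
    last_error = None
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            resp = requests.post(LLM_URL, json=payload, timeout=REQUEST_TIMEOUT)
            resp.raise_for_status()
            return resp.json()
        except (requests.Timeout, requests.ConnectionError) as err:
            last_error = err
            time.sleep(2 ** attempt)  # simple exponential backoff between retries
    # Surface the failure instead of blocking: the job scheduler can then
    # mark the job as failed rather than leaving it pending forever.
    raise RuntimeError(f"LLM service unreachable after {MAX_ATTEMPTS} attempts") from last_error
```

The same idea applies whatever HTTP client the gateway actually uses: every request needs an upper bound, and the scheduler needs to observe the failure instead of waiting on a response that never comes.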
-
Unable to use the [oneAPI](https://github.com/songquanpeng/one-api) API gateway built in an internal environment.
Retrieve the following URL from this [webpage](https://www.cursor.com/security#clie…
-
We should have two listeners: one for prompts and the other for LLM routing (a minimal sketch follows the list below).
Here is a list of the listeners we have today:
- 10000 prompt (to create traceparent)
- 10001 prompt - hosts prompt gateway w…
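To make the intended split concrete, here is a minimal, self-contained sketch of two HTTP listeners with the responsibilities described above: one that creates or propagates a W3C traceparent for prompt traffic, and one reserved for LLM routing. The ports match the list above, but the handlers, and the choice of plain Python rather than the gateway's own configuration, are illustrative assumptions only.

```python
import secrets
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


def new_traceparent() -> str:
    # W3C trace-context header: version-traceid-spanid-flags.
    return f"00-{secrets.token_hex(16)}-{secrets.token_hex(8)}-01"


class PromptHandler(BaseHTTPRequestHandler):
    """Listener on :10000 -- creates/propagates a traceparent for prompt traffic."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        traceparent = self.headers.get("traceparent") or new_traceparent()
        # A real gateway would forward `body` upstream with this header;
        # this sketch just echoes the header it would propagate.
        self.send_response(200)
        self.send_header("traceparent", traceparent)
        self.end_headers()


class RoutingHandler(BaseHTTPRequestHandler):
    """Listener on :10001 -- placeholder for the LLM routing responsibility."""

    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Model/provider selection would live here.
        self.send_response(501)  # not implemented in this sketch
        self.end_headers()


def serve(port: int, handler) -> None:
    ThreadingHTTPServer(("0.0.0.0", port), handler).serve_forever()


if __name__ == "__main__":
    threading.Thread(target=serve, args=(10000, PromptHandler), daemon=True).start()
    serve(10001, RoutingHandler)
```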
-
# Architecture
This document outlines the architecture of the AI Nutrition-Pro application, including system context, containers, and deployment views. The architecture is depicted using C4 diagram…
-
Path: /qstash/integrations/llm
When using a custom LLM provider, the Helicone integration doesn't seem to work. I think this is related to the URL being used for completion.
For example, w…
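As an illustration of the suspected cause (not the actual integration code), the sketch below shows how a completion URL is typically composed from a provider base URL. If the custom provider's base URL is used directly, requests go straight to that provider and never pass through Helicone's proxy, so nothing appears in the analytics. The variable names and URLs are hypothetical.

```python
# Hypothetical illustration of how the completion URL might be composed.
CUSTOM_BASE_URL = "https://my-llm.internal/v1"          # custom provider (assumption)
HELICONE_PROXY_URL = "https://gateway.helicone.ai/v1"   # observability proxy (assumption)


def completion_url(base_url: str) -> str:
    """Join the provider base URL with the chat-completions path."""
    return base_url.rstrip("/") + "/chat/completions"


# With a custom provider, the request goes straight to the provider,
# bypassing Helicone entirely, which would explain the missing traces:
print(completion_url(CUSTOM_BASE_URL))      # https://my-llm.internal/v1/chat/completions

# For Helicone to observe the call, the request would instead have to be
# sent to the proxy, with the real provider passed along separately.
print(completion_url(HELICONE_PROXY_URL))   # https://gateway.helicone.ai/v1/chat/completions
```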
-
*Description*:
The [kubernetes-sigs/llm-instance-gateway](https://github.com/kubernetes-sigs/llm-instance-gateway) project has introduced a new backendRef called [LLMServerPool](https://github.com/…
-
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.13-nikolaik
m@DESKTOP-VKFHU30:~$ sudo docker run -it --privileged --rm --pull=always \
> --network host \
> -e LLM_API_KEY="ollama" \
LLM…
```