-
## Intro
## Status Checks From Last Call
- Guidance re: repo ownership / etc (Thom + Hunter)
- Onchain ticket distinguishing using CallData (Victor + Rick)
- Streaming workflow architecture update…
-
**Problem description**
After creating an endpoint with the Cloudflare AI Gateway and entering the API domain in chatbox's settings screen, requests fail with an error.
The error is that the URL chatbox assembles contains `v1` twice: the Cloudflare AI Gateway endpoint appends a `v1` itself when called, and your code [src/packages/llm.ts](https://github.c…
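One possible client-side fix is to normalize the configured base URL before appending the API path, so the joined URL never contains `/v1` twice. A minimal sketch (the function name and behavior are hypothetical, not the actual chatbox code):

```python
def normalize_base_url(base_url: str, path: str = "/v1/chat/completions") -> str:
    """Join an API base with a request path without duplicating the /v1 segment."""
    base = base_url.rstrip("/")
    # If the configured endpoint already ends in /v1 (as a gateway endpoint
    # may), strip it so the joined URL contains /v1 only once.
    if base.endswith("/v1") and path.startswith("/v1"):
        base = base[: -len("/v1")]
    return base + path
```

With this helper, both `https://example.com` and `https://example.com/v1` resolve to the same request URL.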
-
We've been coding against the Ollama API internally, and eventually it hit me: Ollama should be able to support third-party API providers, making it a de facto gateway to LLMs.
For example, it wou…
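As a sketch of the gateway idea, a prefix-based routing table could map model names to upstream OpenAI-compatible endpoints. The prefixes and URLs below are illustrative assumptions, not an existing Ollama feature (Ollama itself already serves an OpenAI-compatible API under `/v1`):

```python
# Hypothetical routing table mapping model-name prefixes to upstream
# OpenAI-compatible base URLs.
UPSTREAMS = {
    "openai/": "https://api.openai.com/v1",
    "local/": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
}

def resolve_upstream(model: str) -> tuple[str, str]:
    """Return (base_url, upstream_model_name) for a prefixed model name."""
    for prefix, base in UPSTREAMS.items():
        if model.startswith(prefix):
            return base, model[len(prefix):]
    raise ValueError(f"no upstream configured for {model!r}")
```

A gateway would then forward, e.g., `local/llama3` to the Ollama endpoint with the prefix stripped.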
-
### Description
When using `memory=True` for a crew that uses Azure OpenAI, there is an error creating long-term memory.
### Steps to Reproduce
```
import os
from chromadb.utils.embedding_…
```
-
### The Feature
Prompt caching is harder to trigger when litellm load-balances across several deployments (using Azure as an example). If the litellm gateway is configured with, say, 3 deployments fo…
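One way to keep caching effective under load balancing is session-affinity routing: hash a stable key (for example a session or user ID) to pick a deployment, so repeated requests with the same prompt prefix land on the same provider-side cache. A minimal sketch of the idea; the deployment names are placeholders and this is not the litellm routing API:

```python
import hashlib

# Hypothetical deployment names for three Azure instances of the same model.
DEPLOYMENTS = ["azure-gpt4o-1", "azure-gpt4o-2", "azure-gpt4o-3"]

def pick_deployment(session_id: str, deployments=DEPLOYMENTS) -> str:
    """Deterministically map a session to one deployment so that the
    provider-side prompt cache for that session stays warm."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return deployments[int.from_bytes(digest[:8], "big") % len(deployments)]
```

Every call with the same `session_id` returns the same deployment, while different sessions still spread across the pool.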
-
/kind feature
**Description**
Currently, the RawDeployment mode in KServe doesn't support scaling to and from 0. Integrating with KEDA and KEDA-HTTP could enable this functionality and provide ad…
-
### What happened?
Description:
I attempted to integrate [PortKey](https://portkey.ai/) with LiteLLM in two ways. Here’s a summary of the steps and outcomes:
**Attempt 1:** Configuring PortKey …
-
### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Describe the bug and reproduction steps
1 - Follow installation steps ...
2 - Get error msg `SANDBOX…
-
### Describe your problem
![image](https://github.com/user-attachments/assets/7de1d9b7-8377-4639-884e-4c1931e52910)
On the production side, we deployed ragflow; when setting the API key for the Tongyi Qianwen (Qwen) large model, some errors occurred. Please help analyze the specific cause.
-
```
14:40:48,572 graphrag.llm.openai.create_openai_client INFO Creating OpenAI client base_url=http://localhost:11434/v1
14:40:49,322 graphrag.index.llm.load_llm INFO create TPM/RPM limiter for mistral: …
```