-
OpenAI's API supports [request batching](https://platform.openai.com/docs/guides/rate-limits/batching-requests). N users (as of 2023-03-07, N = 3) have expressed interest in batching on the CRFM proxy…
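Batching here means sending several prompts in one request instead of one request per prompt. As a rough, hypothetical sketch (illustrative names only, not the CRFM proxy's actual API), prompts could be grouped client-side before each call:

```python
def chunk_prompts(prompts, batch_size):
    """Split a flat list of prompts into batches of at most batch_size,
    so each batch can be sent as a single API request."""
    return [prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]

batches = chunk_prompts(["p1", "p2", "p3", "p4", "p5"], batch_size=2)
print(batches)  # → [['p1', 'p2'], ['p3', 'p4'], ['p5']]
```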
-
This is really awesome. Is it possible to use with Azure endpoints?
https://learn.microsoft.com/en-us/azure/cognitive-services/openai/quickstart?pivots=programming-language-python&tabs=command-lin…
-
### Library name and version
Azure.AI.OpenAI.Assistants 1.0.0-beta.4
### Describe the bug
We have a .NET Core API and have installed the Azure.AI.OpenAI.Assistants 1.0.0-beta.4 NuGet package.
We ar…
-
## Initial support added in https://github.com/danny-avila/LibreChat/pull/2781
> 5/19/24
Read the PR notes for more details on what changed, and what is still to be done.
---
### Discussed…
-
Add support for raising custom Wagtail AI rate-limit exceptions.
I'm not aware of any existing support for rate limiting within Wagtail, and I'm unsure which library would be preferable to use here, so…
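One possible shape for this, sketched with illustrative names (as far as I know, Wagtail AI ships no such classes today): a dedicated exception plus a minimal fixed-window limiter that raises it.

```python
import time


class WagtailAIRateLimitExceeded(Exception):
    """Hypothetical custom exception for Wagtail AI rate-limit errors."""


class SimpleRateLimiter:
    """Minimal fixed-window limiter; a sketch, not Wagtail AI's actual API."""

    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = []  # timestamps of recent calls

    def check(self, now=None):
        """Record one call, raising if the window already holds max_calls."""
        now = time.monotonic() if now is None else now
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.max_calls:
            raise WagtailAIRateLimitExceeded(
                f"more than {self.max_calls} calls in {self.window}s"
            )
        self.calls.append(now)
```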
-
**Describe the bug:**
I can't switch to any gpt-4 model because aicommit exceeds the rate limits, even though the rate limit is set to 20 RPM.
> Rate limit reached for gpt-4 in organization org-XXXXX on …
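A common workaround for this class of error is exponential backoff around the API call. A minimal sketch (the function name is illustrative and `RuntimeError` stands in for the client's rate-limit exception; this is not aicommit's internals):

```python
import time


def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying with exponentially growing delays when it raises.
    RuntimeError is a stand-in for the client library's rate-limit error."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            sleep(base_delay * (2 ** attempt))
```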
-
### Module path
gpt github review
### review-gpt CLI version
0.9.4
### Describe the bug
The workflow fails when running the `Run source .env/bin/activate` sub step in `Run microsoft/gpt…
-
The error occurs in the chatbot's `get_answer` function:
~~~python
result = self.chatchain(
    {
        "question": query,
        "chat_history": chat_history_for_chain,
    },
    return_only_outputs=True,
)
~~~
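For reference, a stand-alone sketch of the call shape involved (`fake_chatchain` is a stand-in, not the project's actual chain): chains of this style take a dict with `question` and `chat_history`, and with `return_only_outputs=True` return only the output keys.

```python
def fake_chatchain(inputs, return_only_outputs=False):
    """Stand-in for self.chatchain, illustrating the expected call shape."""
    outputs = {"answer": f"echo: {inputs['question']}"}
    if return_only_outputs:
        return outputs          # only the chain's output keys
    return {**inputs, **outputs}  # otherwise inputs are echoed back too

result = fake_chatchain(
    {"question": "hi", "chat_history": []},
    return_only_outputs=True,
)
print(result["answer"])  # → echo: hi
```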
-
### Problem Description
Consider the following scenario:
Via one-api, a gemini-pro model is exposed in the OpenAI request format, i.e. the request looks like:
~~~http
POST {{one_base}}/v1/chat/completions
Content-Type: application/json
Authorization: Bearer {{one_key}}
…
-
**Is your feature request related to a problem? Please describe.**
Hide Azure's API key instead of showing it in plaintext.
**Describe the solution you'd like**
A clear and concise description of…
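One way this could look (a hypothetical helper, not an existing setting): mask everything except the last few characters whenever the key is rendered.

```python
def mask_api_key(key, visible=4):
    """Hypothetical helper: replace all but the last `visible` characters
    with asterisks before displaying the key."""
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]

print(mask_api_key("abcdefgh"))  # → ****efgh
```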