-
Getting the following error:
```
2024-10-17 01:53:56.311 | WARNING | pr_agent.algo.ai_handlers.litellm_ai_handler:chat_completion:234 - Error during LLM inference: litellm.BadRequestError: AzureEx…
```
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a…
-
### Describe the feature
Add support for https://openrouter.ai/; there are many models there that I use all the time.
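For context, OpenRouter exposes an OpenAI-compatible API, so an existing OpenAI client can usually be pointed at it just by swapping the base URL. A minimal sketch; the model id and key placeholder below are illustrative, not part of this project:

```python
# Hedged sketch: OpenRouter speaks the OpenAI chat-completions protocol,
# so the standard openai client works against its endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder, not a real key
)
resp = client.chat.completions.create(
    model="openai/gpt-4o",  # illustrative OpenRouter model id
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```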
-
## Goal
> Note: This Epic has changed multiple times, as our architecture has also changed.
> A lot of the early comments refer to a different context,
> e.g. "Provider Abstraction" in Jan.
…
-
Getting the following error:
{"text": "Error during LLM inference: litellm.BadRequestError: OpenAIException - Unsupported value: 'messages[0].role' does not support 'system' with this model.\n", "re…
-
### Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
# Feature Request
I want to try using the new [o1](https://openai.com/o1/) family models from OpenAI, but OWUI seems to filter the list of available OpenAI models down to those with "gpt" in the na…
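To illustrate the filtering behavior being described, here is a hypothetical sketch (not OWUI's actual code): a substring filter on "gpt" silently drops the o1 family, while a prefix allow-list can admit both:

```python
# Illustrative model ids and ALLOWED_PREFIXES; not OWUI's real filter.
models = ["gpt-4o", "gpt-4o-mini", "o1-preview", "o1-mini", "whisper-1"]

# A name-substring filter drops the o1 family entirely:
gpt_only = [m for m in models if "gpt" in m]
print(gpt_only)  # ['gpt-4o', 'gpt-4o-mini']

# A prefix allow-list admits both families:
ALLOWED_PREFIXES = ("gpt", "o1")
chat_models = [m for m in models if m.startswith(ALLOWED_PREFIXES)]
print(chat_models)  # ['gpt-4o', 'gpt-4o-mini', 'o1-preview', 'o1-mini']
```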
-
Hi! Very nice work you did with this plugin. Have you managed to package it for easy install?
I think it could be interesting to allow the user to switch models in the chat, or have the same quer…
-
When using the https://aimlapi.com/ gpt-o1 model, the Telegram bot sends the full JSON from the query instead of only the text answer from the `content` field:
```
ME, [08.11.2024 13:54]
who are you and what can you do?
AI, [08.1…
```
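The likely fix for this symptom is to forward only the nested `content` field of the response to Telegram rather than the raw payload. A minimal sketch, assuming a standard OpenAI-style chat-completion shape; the `raw` payload below is illustrative:

```python
import json

# Illustrative response payload in the standard chat-completion shape.
raw = '{"choices": [{"message": {"role": "assistant", "content": "I am an assistant..."}}]}'
response = json.loads(raw)

# Extract only the text answer instead of forwarding the whole JSON.
text = response["choices"][0]["message"]["content"]
print(text)  # "I am an assistant..."
```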
-
o1 support was added in response to
- #570
However, this hardwires the streamability check to named o1 models. I am accessing o1-preview via a (litellm) proxy, so I get a `'message': 'litellm.BadRequ…
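One way to avoid hard-coding behavior to literal model names is to let the deployment override the check explicitly. A hedged sketch; `UNSTREAMABLE_MODELS` and `supports_streaming` are hypothetical names, not this project's API:

```python
# Prefer an explicit per-deployment override over matching literal model
# names, which breaks when a proxy exposes the model under another name.
UNSTREAMABLE_MODELS = {"o1-preview", "o1-mini"}

def supports_streaming(model: str, override: bool | None = None) -> bool:
    """Use the explicit override if given; otherwise fall back to names."""
    if override is not None:
        return override
    return model not in UNSTREAMABLE_MODELS

# Behind a litellm proxy the model name may not contain "o1" at all,
# so the caller states the capability instead of relying on the name.
print(supports_streaming("my-proxy-model", override=False))  # False
```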