-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
### What is the issue?
The streamed chat-completion response from Ollama's OpenAI-compatible API repeats `"role": "assistant"` in every returned chunk. This differs from OpenAI's API, which just has…
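A minimal sketch for reproducing the difference, assuming a local Ollama instance on its default `http://localhost:11434/v1` endpoint and a pulled `llama3` model (both placeholders):

```
# Sketch: stream a chat completion through Ollama's OpenAI-compatible endpoint
# and print each chunk's delta to see whether "role" is repeated.
from openai import OpenAI

# Placeholder base URL and model; adjust to your local Ollama setup.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

stream = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
)

for chunk in stream:
    # With OpenAI's API, only the first chunk's delta carries the role;
    # here every chunk's delta reportedly includes "role": "assistant".
    print(chunk.choices[0].delta)
```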
-
The current AzureOpenAI auto-configuration code assumes that users will provide a static API key; if no key is set, an exception is thrown, see https://github.com/spring-projects/spring-ai/blob/c…
-
Hello everyone.
I am trying to use this library with Azure's Cognitive OpenAI deployment. I am new to this stuff, but after creating my OpenAI resource and deployment it appears that the endpoint do…
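For reference, a minimal sketch of how the Azure endpoint, deployment name, and API version fit together when calling the service with the official `openai` Python SDK (all values below are placeholders; the library in question may wrap this configuration differently):

```
# Sketch: calling an Azure OpenAI deployment directly with the openai SDK.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder resource endpoint
    api_key="<your-key>",                                       # placeholder key
    api_version="2024-02-01",                                   # placeholder API version
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # Azure expects the deployment name here, not the model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```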
-
```
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=azure_openai_complete(
        azure_endpoint="XXXX",
        azure_deployment="gpt-4o",
        openai_api_version="XXXX",
        …
```
-
### What happened?
The [docs](https://docs.litellm.ai/docs/providers/openai#optional-keys---openai-organization-openai-api-base) say that litellm supports the `OPENAI_ORGANIZATION` env variable. Bu…
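The documented usage being tested is roughly the following sketch (API key, organization ID, and model are placeholders; whether the env variable is actually honored is exactly what this issue questions):

```
# Sketch: set the documented env variable, then make a completion call with litellm.
import os

os.environ["OPENAI_API_KEY"] = "sk-..."        # placeholder
os.environ["OPENAI_ORGANIZATION"] = "org-..."  # placeholder org ID the docs say is picked up

import litellm

response = litellm.completion(
    model="gpt-4o",  # placeholder model
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```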
-
How to use an OpenAI model (GPT-4)?
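If the question is simply how to call GPT-4 through the official `openai` Python SDK, a minimal sketch follows (model name and prompt are placeholders; the library this question targets may expose its own configuration instead):

```
# Sketch: a plain GPT-4 chat completion with the official openai SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # or another GPT-4 family model
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```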
-
File "/opt/venv/lib/python3.11/site-packages/openai/_base_client.py", line 921, in request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 -…
-
OpenAI released a new API feature called [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs). It's designed to reduce latency when most of the outpu…
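Per the linked OpenAI docs, the feature is exposed as a `prediction` parameter on chat completions; a minimal sketch of the raw API usage (model and content are placeholders):

```
# Sketch: Predicted Outputs via the openai SDK's `prediction` parameter.
from openai import OpenAI

client = OpenAI()

existing_code = "def hello():\n    print('hello')\n"  # placeholder text expected to be mostly reused

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; Predicted Outputs is limited to supported models
    messages=[{"role": "user", "content": "Rename the function to greet and return the full file."}],
    # The prediction is the text most of the output is expected to match,
    # letting the API skip regenerating unchanged spans.
    prediction={"type": "content", "content": existing_code},
)
print(response.choices[0].message.content)
```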