Closed: skywolf123 closed this issue 3 days ago
To resolve the error "Unsupported value: 'stream' does not support true with this model" when configuring the o1-mini model through the OpenAI-API-compatible interface, set the `stream` parameter to `False`. Here is an example of how to configure it:
```python
import os

from core.model_runtime.entities.llm_entities import LLMResult
from core.model_runtime.entities.message_entities import SystemPromptMessage, UserPromptMessage
from core.model_runtime.model_providers.openai_api_compatible.llm.llm import OAIAPICompatLargeLanguageModel

model = OAIAPICompatLargeLanguageModel()

response = model.invoke(
    model="o1-mini",
    credentials={
        "api_key": os.environ.get("YOUR_API_KEY"),
        "endpoint_url": "https://api.yourprovider.com/v1/",
        "mode": "completion",
    },
    prompt_messages=[
        SystemPromptMessage(content="You are a helpful AI assistant."),
        UserPromptMessage(content="Who are you?"),
    ],
    model_parameters={
        "temperature": 1.0,
        "top_k": 2,
        "top_p": 0.5,
    },
    stop=["How"],
    stream=False,  # o1-mini rejects streaming, so stream must be False
    user="abc-123",
)

# With stream=False the call returns a complete LLMResult rather than a generator.
assert isinstance(response, LLMResult)
assert len(response.message.content) > 0
```
Make sure to replace `"YOUR_API_KEY"` and `"https://api.yourprovider.com/v1/"` with your actual API key and endpoint URL.
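If you want to confirm that the rejection comes from the upstream API rather than from Dify, you can call the endpoint directly. This is a minimal sketch, assuming an OpenAI-compatible `/chat/completions` route; the URL and the `YOUR_API_KEY` environment variable are placeholders, not values from this issue:

```python
# Direct check against an OpenAI-compatible endpoint (placeholder URL and key).
# For o1-mini, "stream": True is rejected with the error above; False succeeds.
import os

import requests

resp = requests.post(
    "https://api.yourprovider.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['YOUR_API_KEY']}"},
    json={
        "model": "o1-mini",
        "messages": [{"role": "user", "content": "Who are you?"}],
        "stream": False,  # set to True to reproduce the error
    },
    timeout=60,
)
print(resp.status_code, resp.json())
```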
Additionally, ensure that the model's YAML configuration file defines the `stream` parameter correctly. The `stream` parameter should default to `True`, but you can override it by setting it to `False` in your invocation [1][2].
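For reference, here is a hedged sketch of what such a rule could look like in the model's YAML file, assuming it follows Dify's `parameter_rules` schema; verify the exact field names against your provider's actual configuration:

```yaml
# Hypothetical parameter rule: check your provider's actual schema.
parameter_rules:
  - name: stream
    label:
      en_US: Stream
    type: boolean
    default: true  # override with stream=False at invocation time for o1-mini
```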
Duplicate of https://github.com/langgenius/dify/issues/8615
OpenAI's new models do indeed have this issue. Currently, we only have the resources to maintain the official mainstream model providers; for compatibility modes, we hope the community can contribute PRs. Additionally, our page uses streaming output by default, so it doesn't support the approach mentioned in the previous issue.
Dify version
0.8.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
When I configure the OpenAI-API-compatible provider and add the o1-mini model, I get an error.
Because my o1-mini proxy is different from my gpt-4o proxy, I need to configure it through OpenAI-API-compatible. The configuration can be saved, but the error is reported. When I point the same o1-mini proxy at the OpenAI provider instead, it works normally, which suggests that the OpenAI and OpenAI-API-compatible channels configure o1-mini differently. How do I configure OpenAI-API-compatible to use o1-mini?
The following error message is displayed: "Unsupported value: 'stream' does not support true with this model"
✔️ Expected Behavior
No response
❌ Actual Behavior
No response