langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

o1-mini.yaml max output token error, should be 65536 #8381

Closed: changjiangbailang closed this issue 1 week ago

changjiangbailang commented 1 week ago

Self Checks

Dify version

0.8.2

Cloud or Self Hosted

Self Hosted (Docker), Self Hosted (Source)

Steps to reproduce

[two screenshots attached]

✔️ Expected Behavior

No response

❌ Actual Behavior

No response

dosubot[bot] commented 1 week ago

@changjiangbailang Hello there! I'm here to help you with any bugs, questions, or contributions you may have. Let's tackle this together!

To resolve the max output token error in the o1-mini.yaml configuration file, set the max value for max_tokens to 65536, which is o1-mini's output token limit. Here is the corrected configuration:

model: o1-mini
label:
  zh_Hans: o1-mini
  en_US: o1-mini
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: max_tokens
    use_template: max_tokens
    default: 65536
    min: 1
    max: 65536
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '3.00'
  output: '12.00'
  unit: '0.000001'
  currency: USD

This change sets both the default and the max value for max_tokens to 65536, which should resolve the error [1][2].
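If you want to sanity-check the edited file before restarting Dify, a minimal sketch along these lines loads the YAML and asserts both bounds. It assumes PyYAML is installed; the path below is the usual OpenAI provider location in a source checkout, but treat it as an assumption and adjust for your deployment:

import yaml  # PyYAML; install with `pip install pyyaml` if missing

# Assumed path in a Dify source checkout; adjust if your layout differs.
CONFIG_PATH = "api/core/model_runtime/model_providers/openai/llm/o1-mini.yaml"

with open(CONFIG_PATH, encoding="utf-8") as f:
    config = yaml.safe_load(f)

# Index the parameter rules by name and check the max_tokens bounds.
rules = {rule["name"]: rule for rule in config["parameter_rules"]}
max_tokens = rules["max_tokens"]

assert max_tokens["default"] == 65536, f"unexpected default: {max_tokens['default']}"
assert max_tokens["max"] == 65536, f"unexpected max: {max_tokens['max']}"
print("o1-mini.yaml max_tokens bounds look correct")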

To continue talking to Dosu, mention @dosu.