lobehub / lobe-chat

🤯 Lobe Chat - an open-source, modern-design AI chat framework. Supports Multi AI Providers (OpenAI / Claude 3 / Gemini / Ollama / Azure / DeepSeek), Knowledge Base (file upload / knowledge management / RAG), Multi-Modals (Vision / TTS) and plugin system. One-click FREE deployment of your private ChatGPT / Claude application.
https://chat-preview.lobehub.com

[Request] Support the o1-mini and o1-preview models #3944

Closed · Xiaokai6880 closed this 1 week ago

Xiaokai6880 commented 1 week ago

🥰 Description of requirements

Because the o1-mini and o1-preview API endpoints do not support streaming and require temperature to be exactly 1, these models currently cannot be called. We hope support for them can be added.

🧐 Solution

Add support for the o1-mini and o1-preview models.

📝 Supplementary information

(screenshots omitted)
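For context, here is a minimal sketch of a request that satisfies the constraints described above, using the official `openai` Node SDK (the model name comes from the issue; everything else is illustrative):

```typescript
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// o1-mini / o1-preview reject any temperature other than 1 and,
// at the time of this issue, do not support streaming.
const completion = await client.chat.completions.create({
  model: 'o1-mini',
  messages: [{ role: 'user', content: 'Hello' }],
  temperature: 1, // any other value is rejected by the API
  stream: false,  // streaming must stay off
});

console.log(completion.choices[0].message.content);
```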

lobehubbot commented 1 week ago

👀 @Xiaokai6880

Thank you for raising an issue. We will look into the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.


dosubot[bot] commented 1 week ago

Hi @Xiaokai6880! I'm Dosu, a bot that can help you solve bugs, answer questions, and become a contributor. I'll help you while you wait for a human maintainer.

The o1-mini and o1-preview models can be adapted to support streaming and to allow the temperature parameter to take values other than 1. The existing codebase already contains logic for handling streaming and the temperature parameter.

Here is the relevant code that handles the temperature parameter and streaming:


```typescript
// Excerpt from the Qwen provider (a class method; `omit` is imported elsewhere in the file).
private buildCompletionParamsByModel(payload: ChatStreamPayload) {
  const { model, temperature, top_p, stream, messages, tools } = payload;
  const isVisionModel = model.startsWith('qwen-vl');
  const params = {
    ...payload,
    messages,
    result_format: 'message',
    stream: !!tools?.length ? false : (stream ?? true),
    temperature:
      temperature === 0 || temperature >= 2 ? undefined : temperature === 1 ? 0.999 : temperature, // 'temperature' must be Float
    top_p: top_p && top_p >= 1 ? 0.999 : top_p,
  };

  /* Qwen-vl models temporarily do not support parameters below. */
  /* Notice: `top_p` imposes significant impact on the result,the default 1 or 0.999 is not a proper choice. */
  return isVisionModel
    ? omit(
        params,
        'presence_penalty',
        'frequency_penalty',
        'temperature',
        'result_format',
        'top_p',
      )
    : omit(params, 'frequency_penalty');
}
```
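Two details worth noting: `temperature` values of `0` or `≥ 2` are dropped (presumably so the provider default applies), and an exact `1` is remapped to `0.999` because, as the inline comment notes, the value must serialize as a float rather than an integer.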

In addition, the `LobeMoonshotAI` and `LobeBaichuanAI` configurations show how to handle the temperature parameter for streaming:

```typescript
import OpenAI from 'openai';

import { ChatStreamPayload, ModelProvider } from '../types';
import { LobeOpenAICompatibleFactory } from '../utils/openaiCompatibleFactory';

export const LobeMoonshotAI = LobeOpenAICompatibleFactory({
  baseURL: 'https://api.moonshot.cn/v1',
  chatCompletion: {
    handlePayload: (payload: ChatStreamPayload) => {
      const { temperature, ...rest } = payload;

      return {
        ...rest,
        temperature: temperature !== undefined ? temperature / 2 : undefined,
      } as OpenAI.ChatCompletionCreateParamsStreaming;
    },
  },
  debug: {
    chatCompletion: () => process.env.DEBUG_MOONSHOT_CHAT_COMPLETION === '1',
  },
  provider: ModelProvider.Moonshot,
});

export const LobeBaichuanAI = LobeOpenAICompatibleFactory({
  baseURL: 'https://api.baichuan-ai.com/v1',
  chatCompletion: {
    handlePayload: (payload: ChatStreamPayload) => {
      const { temperature, ...rest } = payload;

      return {
        ...rest,
        // [baichuan] frequency_penalty must be between 1 and 2.
        frequency_penalty: undefined,
        temperature: temperature !== undefined ? temperature / 2 : undefined,
      } as OpenAI.ChatCompletionCreateParamsStreaming;
    },
  },
  debug: {
    chatCompletion: () => process.env.DEBUG_BAICHUAN_CHAT_COMPLETION === '1',
  },
  provider: ModelProvider.Baichuan,
});
```
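In both factories the incoming OpenAI-style `temperature` (range 0-2) is halved, presumably because these providers accept values in [0, 1]; Baichuan additionally drops `frequency_penalty`, since its accepted range (1-2) differs from the OpenAI convention.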

To make the o1-mini and o1-preview models support streaming and allow the temperature parameter to take values other than 1, make sure the `model` parameter does not start with `qwen-vl` and that `temperature` is set correctly in the payload; the code already handles these conditions. In addition, the `LobeMoonshotAI` and `LobeBaichuanAI` configurations show how to adjust the temperature parameter for streaming, ensuring flexibility in setting temperature [1][2][3].
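Following the same pattern, a hypothetical adaptation for the o1 models could enforce their constraints inside `handlePayload`. This is only a sketch under the assumptions above (the export name is made up, and this is not the actual patch that shipped), but it shows the shape such a fix could take:

```typescript
import OpenAI from 'openai';

import { ChatStreamPayload, ModelProvider } from '../types';
import { LobeOpenAICompatibleFactory } from '../utils/openaiCompatibleFactory';

// Hypothetical example, not the shipped fix: enforce the o1 API constraints.
export const LobeO1ExampleAI = LobeOpenAICompatibleFactory({
  baseURL: 'https://api.openai.com/v1',
  chatCompletion: {
    handlePayload: (payload: ChatStreamPayload) => {
      const { model, stream, temperature, ...rest } = payload;
      const isO1 = model.startsWith('o1');

      return {
        ...rest,
        model,
        // o1 models reject streaming, so force it off for them.
        stream: isO1 ? false : stream,
        // o1 models only accept temperature === 1.
        temperature: isO1 ? 1 : temperature,
      } as OpenAI.ChatCompletionCreateParams;
    },
  },
  provider: ModelProvider.OpenAI,
});
```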

To continue talking to Dosu, mention @dosu.

lobehubbot commented 1 week ago

✅ @Xiaokai6880

This issue is closed. If you have any questions, you can comment and we will reply.

lobehubbot commented 1 week ago

:tada: This issue has been resolved in version 1.17.0 :tada:

The release is available on:

Your semantic-release bot :package::rocket:

lobehubbot commented 1 week ago

Bot detected the comment's language is not English, translated it automatically.


Can you please publish to the latest docker image?