lobehub / lobe-chat

🤯 Lobe Chat - an open-source, modern-design LLMs/AI chat framework. Supports Multi AI Providers( OpenAI / Claude 3 / Gemini / Ollama / Bedrock / Azure / Mistral / Perplexity ), Multi-Modals (Vision/TTS) and plugin system. One-click FREE deployment of your private ChatGPT chat application.
https://chat-preview.lobehub.com

gemini-1.5 and gemini-ultra api are now available #1769

Closed: Masque423 closed this issue 3 months ago

Masque423 commented 3 months ago

🥰 Feature Description

The APIs for gemini-1.5-pro and gemini-ultra are now available for use, and I hope more interfaces can be added. Additionally, there may be an issue with the current gemini-1.5 handling: my API key does not support calls to gemini-1.5, yet I still receive successful responses when selecting the gemini 1.5 model.

🧐 Proposed Solution

I don't know.

📝 Additional Information

The Gemini API address varies depending on the model, so changing the model implies changing the request URL.
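As a minimal sketch of this point: in the public Gemini REST API, the model name is embedded in the request path, so switching models means building a different URL. The helper name `buildGenerateContentUrl` is hypothetical, but the URL pattern matches the `v1beta` endpoint shown in the curl output below.

```typescript
// The Gemini v1beta base URL; the model id is part of the request path.
const GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta";

// Hypothetical helper: build the generateContent URL for a given model id.
// Switching from gemini-pro to gemini-1.5-pro-latest changes the path itself,
// not just a request parameter.
function buildGenerateContentUrl(model: string, apiKey?: string): string {
  const url = `${GEMINI_BASE}/models/${model}:generateContent`;
  return apiKey ? `${url}?key=${apiKey}` : url;
}
```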

lobehubbot commented 3 months ago

👀 @Masque423

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.

Masque423 commented 3 months ago
```shell
curl "https://generativelanguage.googleapis.com/v1beta/models?key=$API_KEY" \
  -H 'Content-Type: application/json' \
  -X GET
```

```json
{
  "models": [
    {
      "name": "models/gemini-1.5-pro-latest",
      "version": "001",
      "displayName": "Gemini 1.5 Pro",
      "description": "Mid-size multimodal model that supports up to 1 million tokens",
      "inputTokenLimit": 1048576,
      "outputTokenLimit": 8192,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 2,
      "topP": 0.4,
      "topK": 32
    },
    {
      "name": "models/gemini-pro",
      "version": "001",
      "displayName": "Gemini 1.0 Pro",
      "description": "The best model for scaling across a wide range of tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-pro-vision",
      "version": "001",
      "displayName": "Gemini 1.0 Pro Vision",
      "description": "The best image understanding model to handle a broad range of applications",
      "inputTokenLimit": 12288,
      "outputTokenLimit": 4096,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.4,
      "topP": 1,
      "topK": 32
    },
    {
      "name": "models/gemini-ultra",
      "version": "001",
      "displayName": "Gemini 1.0 Ultra",
      "description": "The most capable model for highly complex tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 32
    }
  ]
}
```
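The `models` list above is enough to discover which model ids can be offered in a chat UI. A minimal sketch, assuming the response shape shown above (the `GeminiModel` interface and `chatModelIds` helper are illustrative, not lobe-chat's actual code): keep only models that support `generateContent` and strip the `models/` prefix to get usable model ids.

```typescript
// Subset of the fields returned by the ListModels endpoint shown above.
interface GeminiModel {
  name: string;
  displayName: string;
  inputTokenLimit: number;
  outputTokenLimit: number;
  supportedGenerationMethods: string[];
}

// Hypothetical helper: extract the ids of models that can serve chat requests.
function chatModelIds(models: GeminiModel[]): string[] {
  return models
    .filter((m) => m.supportedGenerationMethods.includes("generateContent"))
    .map((m) => m.name.replace(/^models\//, ""));
}
```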
GalileoFe commented 3 months ago

[image] Its max token limit should be 1024k, but in fact only 30k is allowed, and the max output is 2048 tokens, so I guess that in lobe-chat we are not requesting gemini-1.5-pro but gemini-1.0-pro.
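The observed limits can be cross-checked against the ListModels output above: a prompt near 100k tokens should fit only gemini-1.5-pro-latest (`inputTokenLimit` 1048576), never the 1.0 models (30720 or less). A small sketch of that reasoning, using the limits from the JSON response (`modelsThatFit` is a hypothetical helper):

```typescript
// Given declared input token limits, return the names of models that could
// have accepted a prompt of the given size. If a ~100k-token prompt is
// rejected, the request likely hit a 1.0 model rather than gemini-1.5-pro.
function modelsThatFit(
  models: { name: string; inputTokenLimit: number }[],
  promptTokens: number,
): string[] {
  return models
    .filter((m) => m.inputTokenLimit >= promptTokens)
    .map((m) => m.name);
}
```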

arvinxx commented 3 months ago

I think I need a key to test, or I can't work on it. Can anyone lend me a gemini-1.5-pro key?

lobehubbot commented 3 months ago

✅ @Masque423

This issue is closed. If you have any questions, you can comment and reply.