bclswl0827 / ChatGemini

✨ ChatGemini is a web client for Google Gemini, positioned as a counterpart to ChatGPT 3.5 with the same interaction flow. It also supports uploading images in chat; the app automatically calls the Gemini-Pro-Vision model for image recognition.
http://ibcl.us/ChatGemini/
MIT License

API error with gemini-1.5-pro #30

Open wansenlyt opened 6 months ago

wansenlyt commented 6 months ago

I was granted access to gemini-1.5-pro a few days ago, but today when using ChatGemini I got this error: [GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1/models/gemini-pro:streamGenerateContent?alt=sse: [403 ] Method doesn't allow unregistered callers (callers without established identity). Please use API Key or other form of API consumer identity to call this API.

When I checked on aistudio.google.com, the default model for my account had at some point been switched to gemini-1.5-pro, and the API management page on aistudio.google.com now shows the test endpoint as: https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY

I deployed ChatGemini via Docker. Could the endpoint be changed through the REACT_APP_GEMINI_API_URL configuration option? Also, many thanks to the developer for the generous contribution.
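A minimal sketch of what such an override might look like for a Docker deployment. Whether the container actually honors REACT_APP_GEMINI_API_URL at start-up is exactly the open question here; the image name, port mapping, and the REACT_APP_GEMINI_API_KEY variable are illustrative assumptions, not confirmed in this thread.

```bash
# Hypothetical override: point the client at the v1beta base URL.
# Image name, port, and REACT_APP_GEMINI_API_KEY are assumptions; only
# REACT_APP_GEMINI_API_URL is taken from the question above.
docker run -d --name chatgemini -p 8080:8080 \
  -e REACT_APP_GEMINI_API_KEY="YOUR_API_KEY" \
  -e REACT_APP_GEMINI_API_URL="https://generativelanguage.googleapis.com/v1beta" \
  bclswl0827/chatgemini
```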

chunzha1 commented 6 months ago

Is there an API for Gemini 1.5 yet? As far as I can tell, it can still only be used in Google's web UI.

wansenlyt commented 6 months ago

It's still the same API, but the default model was switched to 1.5, and the endpoint path changed from /v1/ to /v1beta/.
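One quick way to see which version accepts a given key is to send the same minimal generateContent request to both paths and compare the HTTP status codes. This is only a sketch; the GEMINI_API_KEY variable and the "ping" prompt are illustrative.

```bash
# Compare the old v1 path (from the error message) with the v1beta path (from AI Studio).
curl -s -o /dev/null -w "v1:     %{http_code}\n" \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"ping"}]}]}' \
  "https://generativelanguage.googleapis.com/v1/models/gemini-pro:generateContent?key=$GEMINI_API_KEY"
curl -s -o /dev/null -w "v1beta: %{http_code}\n" \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"ping"}]}]}' \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=$GEMINI_API_KEY"
```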

chunzha1 commented 6 months ago

```
$ curl https://generativelanguage.googleapis.com/v1beta/models?key=$API_KEY
{
  "models": [
    { "name": "models/chat-bison-001", "version": "001", "displayName": "PaLM 2 Chat (Legacy)", "description": "A legacy text-only model optimized for chat conversations", "inputTokenLimit": 4096, "outputTokenLimit": 1024, "supportedGenerationMethods": ["generateMessage", "countMessageTokens"], "temperature": 0.25, "topP": 0.95, "topK": 40 },
    { "name": "models/text-bison-001", "version": "001", "displayName": "PaLM 2 (Legacy)", "description": "A legacy model that understands text and generates text as an output", "inputTokenLimit": 8196, "outputTokenLimit": 1024, "supportedGenerationMethods": ["generateText", "countTextTokens", "createTunedTextModel"], "temperature": 0.7, "topP": 0.95, "topK": 40 },
    { "name": "models/embedding-gecko-001", "version": "001", "displayName": "Embedding Gecko", "description": "Obtain a distributed representation of a text.", "inputTokenLimit": 1024, "outputTokenLimit": 1, "supportedGenerationMethods": ["embedText", "countTextTokens"] },
    { "name": "models/gemini-1.0-pro", "version": "001", "displayName": "Gemini 1.0 Pro", "description": "The best model for scaling across a wide range of tasks", "inputTokenLimit": 30720, "outputTokenLimit": 2048, "supportedGenerationMethods": ["generateContent", "countTokens"], "temperature": 0.9, "topP": 1, "topK": 1 },
    { "name": "models/gemini-1.0-pro-001", "version": "001", "displayName": "Gemini 1.0 Pro 001 (Tuning)", "description": "The best model for scaling across a wide range of tasks. This is a stable model that supports tuning.", "inputTokenLimit": 30720, "outputTokenLimit": 2048, "supportedGenerationMethods": ["generateContent", "countTokens", "createTunedModel"], "temperature": 0.9, "topP": 1, "topK": 1 },
    { "name": "models/gemini-1.0-pro-latest", "version": "001", "displayName": "Gemini 1.0 Pro Latest", "description": "The best model for scaling across a wide range of tasks. This is the latest model.", "inputTokenLimit": 30720, "outputTokenLimit": 2048, "supportedGenerationMethods": ["generateContent", "countTokens"], "temperature": 0.9, "topP": 1, "topK": 1 },
    { "name": "models/gemini-1.0-pro-vision-latest", "version": "001", "displayName": "Gemini 1.0 Pro Vision", "description": "The best image understanding model to handle a broad range of applications", "inputTokenLimit": 12288, "outputTokenLimit": 4096, "supportedGenerationMethods": ["generateContent", "countTokens"], "temperature": 0.4, "topP": 1, "topK": 32 },
    { "name": "models/gemini-pro", "version": "001", "displayName": "Gemini 1.0 Pro", "description": "The best model for scaling across a wide range of tasks", "inputTokenLimit": 30720, "outputTokenLimit": 2048, "supportedGenerationMethods": ["generateContent", "countTokens"], "temperature": 0.9, "topP": 1, "topK": 1 },
    { "name": "models/gemini-pro-vision", "version": "001", "displayName": "Gemini 1.0 Pro Vision", "description": "The best image understanding model to handle a broad range of applications", "inputTokenLimit": 12288, "outputTokenLimit": 4096, "supportedGenerationMethods": ["generateContent", "countTokens"], "temperature": 0.4, "topP": 1, "topK": 32 },
    { "name": "models/embedding-001", "version": "001", "displayName": "Embedding 001", "description": "Obtain a distributed representation of a text.", "inputTokenLimit": 2048, "outputTokenLimit": 1, "supportedGenerationMethods": ["embedContent"] },
    { "name": "models/aqa", "version": "001", "displayName": "Model that performs Attributed Question Answering.", "description": "Model trained to return answers to questions that are grounded in provided sources, along with estimating answerable probability.", "inputTokenLimit": 7168, "outputTokenLimit": 1024, "supportedGenerationMethods": ["generateAnswer"], "temperature": 0.2, "topP": 1, "topK": 40 }
  ]
}
```

It looks like v1beta also only lists 1.0 Pro.
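To make that listing easier to scan, the same call can be piped through jq (an extra dependency, not part of the original command) to print just each model's name and supported generation methods:

```bash
# List model names alongside their supported generation methods.
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$API_KEY" \
  | jq -r '.models[] | "\(.name)\t\(.supportedGenerationMethods | join(","))"'
```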

do02fw commented 6 months ago

1.5 is not available yet.

wansenlyt commented 6 months ago

> 1.5 is not available yet.

Thank you.

GitHubChrisChen8035 commented 6 months ago

I'm running into the same problem. Can I force the 1.0 model to be used in Gemini? And roughly when will 1.5 be supported?
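As a raw API check (not a ChatGemini setting), the 1.0 model can still be requested explicitly by name against the v1beta endpoint; this only confirms the API side, using the same $API_KEY placeholder as above and a hypothetical "Hello" prompt:

```bash
# Call gemini-1.0-pro directly by name against the v1beta endpoint.
curl -s \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}' \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.0-pro:generateContent?key=$API_KEY"
```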