Closed. coulsontl closed this issue 7 months ago.
curl \
  -H 'Content-Type: application/json' \
  -X GET 'https://generativelanguage.googleapis.com/v1beta/models?key='
{
"models": [
{
"name": "models/chat-bison-001",
"version": "001",
"displayName": "PaLM 2 Chat (Legacy)",
"description": "A legacy text-only model optimized for chat conversations",
"inputTokenLimit": 4096,
"outputTokenLimit": 1024,
"supportedGenerationMethods": [
"generateMessage",
"countMessageTokens"
],
"temperature": 0.25,
"topP": 0.95,
"topK": 40
},
{
"name": "models/text-bison-001",
"version": "001",
"displayName": "PaLM 2 (Legacy)",
"description": "A legacy model that understands text and generates text as an output",
"inputTokenLimit": 8196,
"outputTokenLimit": 1024,
"supportedGenerationMethods": [
"generateText",
"countTextTokens",
"createTunedTextModel"
],
"temperature": 0.7,
"topP": 0.95,
"topK": 40
},
{
"name": "models/embedding-gecko-001",
"version": "001",
"displayName": "Embedding Gecko",
"description": "Obtain a distributed representation of a text.",
"inputTokenLimit": 1024,
"outputTokenLimit": 1,
"supportedGenerationMethods": [
"embedText",
"countTextTokens"
]
},
{
"name": "models/gemini-1.0-pro",
"version": "001",
"displayName": "Gemini 1.0 Pro",
"description": "The best model for scaling across a wide range of tasks",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-1.0-pro-001",
"version": "001",
"displayName": "Gemini 1.0 Pro 001 (Tuning)",
"description": "The best model for scaling across a wide range of tasks. This is a stable model that supports tuning.",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens",
"createTunedModel"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-1.0-pro-latest",
"version": "001",
"displayName": "Gemini 1.0 Pro Latest",
"description": "The best model for scaling across a wide range of tasks. This is the latest model.",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-1.0-pro-vision-latest",
"version": "001",
"displayName": "Gemini 1.0 Pro Vision",
"description": "The best image understanding model to handle a broad range of applications",
"inputTokenLimit": 12288,
"outputTokenLimit": 4096,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.4,
"topP": 1,
"topK": 32
},
{
"name": "models/gemini-pro",
"version": "001",
"displayName": "Gemini 1.0 Pro",
"description": "The best model for scaling across a wide range of tasks",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-pro-vision",
"version": "001",
"displayName": "Gemini 1.0 Pro Vision",
"description": "The best image understanding model to handle a broad range of applications",
"inputTokenLimit": 12288,
"outputTokenLimit": 4096,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.4,
"topP": 1,
"topK": 32
},
{
"name": "models/embedding-001",
"version": "001",
"displayName": "Embedding 001",
"description": "Obtain a distributed representation of a text.",
"inputTokenLimit": 2048,
"outputTokenLimit": 1,
"supportedGenerationMethods": [
"embedContent"
]
},
{
"name": "models/aqa",
"version": "001",
"displayName": "Model that performs Attributed Question Answering.",
"description": "Model trained to return answers to questions that are grounded in provided sources, along with estimating answerable probability.",
"inputTokenLimit": 7168,
"outputTokenLimit": 1024,
"supportedGenerationMethods": [
"generateAnswer"
],
"temperature": 0.2,
"topP": 1,
"topK": 40
}
]
}
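A response like the one above can be filtered programmatically to see which models actually support a given method. A minimal Python sketch, where the inline JSON is a trimmed stand-in for the full response shown above:

```python
import json

# Trimmed stand-in for the ListModels response shown above.
response = json.loads("""
{
  "models": [
    {"name": "models/chat-bison-001",
     "supportedGenerationMethods": ["generateMessage", "countMessageTokens"]},
    {"name": "models/gemini-1.0-pro",
     "supportedGenerationMethods": ["generateContent", "countTokens"]},
    {"name": "models/embedding-001",
     "supportedGenerationMethods": ["embedContent"]}
  ]
}
""")

# Keep only the models that support the generateContent method.
chat_models = [
    m["name"]
    for m in response["models"]
    if "generateContent" in m.get("supportedGenerationMethods", [])
]
print(chat_models)  # ['models/gemini-1.0-pro']
```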
curl \
  -H 'Content-Type: application/json' \
  -X GET 'https://generativelanguage.googleapis.com/v1/models?key='
{
"models": [
{
"name": "models/gemini-1.0-pro",
"version": "001",
"displayName": "Gemini 1.0 Pro",
"description": "The best model for scaling across a wide range of tasks",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-1.0-pro-001",
"version": "001",
"displayName": "Gemini 1.0 Pro 001 (Tuning)",
"description": "The best model for scaling across a wide range of tasks. This is a stable model that supports tuning.",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens",
"createTunedModel"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-1.0-pro-latest",
"version": "001",
"displayName": "Gemini 1.0 Pro Latest",
"description": "The best model for scaling across a wide range of tasks. This is the latest model.",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-1.0-pro-vision-latest",
"version": "001",
"displayName": "Gemini 1.0 Pro Vision",
"description": "The best image understanding model to handle a broad range of applications",
"inputTokenLimit": 12288,
"outputTokenLimit": 4096,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.4,
"topP": 1,
"topK": 32
},
{
"name": "models/gemini-pro",
"version": "001",
"displayName": "Gemini 1.0 Pro",
"description": "The best model for scaling across a wide range of tasks",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-pro-vision",
"version": "001",
"displayName": "Gemini 1.0 Pro Vision",
"description": "The best image understanding model to handle a broad range of applications",
"inputTokenLimit": 12288,
"outputTokenLimit": 4096,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.4,
"topP": 1,
"topK": 32
},
{
"name": "models/embedding-001",
"version": "001",
"displayName": "Embedding 001",
"description": "Obtain a distributed representation of a text.",
"inputTokenLimit": 2048,
"outputTokenLimit": 1,
"supportedGenerationMethods": [
"embedContent"
]
}
]
}
There is no such model.
My key doesn't have the required permission, so I can't test it.
@songquanpeng It works now, but the model name is gemini-1.5-pro-latest.
curl 'https://generativelanguage.googleapis.com/v1beta/models?key=xxx'
{
"models": [
{
"name": "models/chat-bison-001",
"version": "001",
"displayName": "PaLM 2 Chat (Legacy)",
"description": "A legacy text-only model optimized for chat conversations",
"inputTokenLimit": 4096,
"outputTokenLimit": 1024,
"supportedGenerationMethods": [
"generateMessage",
"countMessageTokens"
],
"temperature": 0.25,
"topP": 0.95,
"topK": 40
},
{
"name": "models/text-bison-001",
"version": "001",
"displayName": "PaLM 2 (Legacy)",
"description": "A legacy model that understands text and generates text as an output",
"inputTokenLimit": 8196,
"outputTokenLimit": 1024,
"supportedGenerationMethods": [
"generateText",
"countTextTokens",
"createTunedTextModel"
],
"temperature": 0.7,
"topP": 0.95,
"topK": 40
},
{
"name": "models/embedding-gecko-001",
"version": "001",
"displayName": "Embedding Gecko",
"description": "Obtain a distributed representation of a text.",
"inputTokenLimit": 1024,
"outputTokenLimit": 1,
"supportedGenerationMethods": [
"embedText",
"countTextTokens"
]
},
{
"name": "models/gemini-1.0-pro",
"version": "001",
"displayName": "Gemini 1.0 Pro",
"description": "The best model for scaling across a wide range of tasks",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-1.0-pro-001",
"version": "001",
"displayName": "Gemini 1.0 Pro 001 (Tuning)",
"description": "The best model for scaling across a wide range of tasks. This is a stable model that supports tuning.",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens",
"createTunedModel"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-1.0-pro-latest",
"version": "001",
"displayName": "Gemini 1.0 Pro Latest",
"description": "The best model for scaling across a wide range of tasks. This is the latest model.",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-1.0-pro-vision-latest",
"version": "001",
"displayName": "Gemini 1.0 Pro Vision",
"description": "The best image understanding model to handle a broad range of applications",
"inputTokenLimit": 12288,
"outputTokenLimit": 4096,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.4,
"topP": 1,
"topK": 32
},
{
"name": "models/gemini-1.0-ultra-latest",
"version": "001",
"displayName": "Gemini 1.0 Ultra",
"description": "The most capable model for highly complex tasks",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 32
},
{
"name": "models/gemini-1.5-pro-latest",
"version": "001",
"displayName": "Gemini 1.5 Pro",
"description": "Mid-size multimodal model that supports up to 1 million tokens",
"inputTokenLimit": 1048576,
"outputTokenLimit": 8192,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 2,
"topP": 0.4,
"topK": 32
},
{
"name": "models/gemini-pro",
"version": "001",
"displayName": "Gemini 1.0 Pro",
"description": "The best model for scaling across a wide range of tasks",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 1
},
{
"name": "models/gemini-pro-vision",
"version": "001",
"displayName": "Gemini 1.0 Pro Vision",
"description": "The best image understanding model to handle a broad range of applications",
"inputTokenLimit": 12288,
"outputTokenLimit": 4096,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.4,
"topP": 1,
"topK": 32
},
{
"name": "models/gemini-ultra",
"version": "001",
"displayName": "Gemini 1.0 Ultra",
"description": "The most capable model for highly complex tasks",
"inputTokenLimit": 30720,
"outputTokenLimit": 2048,
"supportedGenerationMethods": [
"generateContent",
"countTokens"
],
"temperature": 0.9,
"topP": 1,
"topK": 32
},
{
"name": "models/embedding-001",
"version": "001",
"displayName": "Embedding 001",
"description": "Obtain a distributed representation of a text.",
"inputTokenLimit": 2048,
"outputTokenLimit": 1,
"supportedGenerationMethods": [
"embedContent"
]
},
{
"name": "models/aqa",
"version": "001",
"displayName": "Model that performs Attributed Question Answering.",
"description": "Model trained to return answers to questions that are grounded in provided sources, along with estimating answerable probability.",
"inputTokenLimit": 7168,
"outputTokenLimit": 1024,
"supportedGenerationMethods": [
"generateAnswer"
],
"temperature": 0.2,
"topP": 1,
"topK": 40
}
]
}
I get an error on v0.6.4-alpha-1 and couldn't get it configured. Which version did you run this on successfully?
For now I manually added a model mapping:
{
"gemini-1.5-pro": "gemini-1.5-pro-latest"
}
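The mapping above is meant to rewrite the requested model name to the one Gemini actually serves. A minimal Python sketch of how such a mapping is typically applied (the dict mirrors the JSON above; the function name is illustrative, not One API's actual code):

```python
# Model-name mapping as configured above; illustrative only.
MODEL_MAPPING = {
    "gemini-1.5-pro": "gemini-1.5-pro-latest",
}

def resolve_model(requested: str) -> str:
    """Return the upstream model name, falling back to the requested name."""
    return MODEL_MAPPING.get(requested, requested)

print(resolve_model("gemini-1.5-pro"))  # gemini-1.5-pro-latest
print(resolve_model("gemini-pro"))      # gemini-pro (unmapped, passed through)
```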
Thanks in advance! But I added the model mapping and it still errors out.
Adding the mapping doesn't help, because the request URL becomes v1beta.
How can this be fixed?
Set the environment variable GEMINI_VERSION to control which API version One API uses.
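As a sketch of what this setting does (the default value and lookup below are assumptions for illustration, not One API's exact Go code), the proxy reads GEMINI_VERSION from the environment and uses it to build the upstream URL:

```python
import os

def gemini_base_url() -> str:
    """Hypothetical sketch: pick the Gemini API version from the environment,
    defaulting to v1 when GEMINI_VERSION is unset."""
    version = os.environ.get("GEMINI_VERSION", "v1")
    return f"https://generativelanguage.googleapis.com/{version}"

# e.g. set via `docker run -e GEMINI_VERSION=v1beta ...`
os.environ["GEMINI_VERSION"] = "v1beta"
print(gemini_base_url())  # https://generativelanguage.googleapis.com/v1beta
```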
Version: v0.6.5-alpha.18
How do I set it? It doesn't take effect; v0.6.5-alpha.18 still doesn't work.
This still doesn't work.
Don't add a model mapping; manually add a gemini-1.5-pro-latest model instead. That worked for me.
Did you solve it? I'm also stuck on Gemini 1.5.
The API is now available: https://ai.google.dev/models/gemini?hl=zh-cn#model-variations