-
### Description of the bug:
Hi,
I am using function calling along with generationConfig, and I am getting the error message below.
How can we use function calling tools along with GenerationCon…
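For context, in the Gemini REST API the `tools` and `generationConfig` fields are sibling keys in the `generateContent` request body. A minimal sketch of such a payload follows; the function declaration and config values are illustrative, not taken from this report:

```python
import json

# Illustrative generateContent request body combining function calling
# with generationConfig. The "get_weather" declaration and the config
# values are examples only.
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "What is the weather in Paris?"}]}
    ],
    "tools": [
        {
            "functionDeclarations": [
                {
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                }
            ]
        }
    ],
    "generationConfig": {"temperature": 0.2, "maxOutputTokens": 256},
}

print(json.dumps(payload, indent=2))
```

Whether a given generationConfig option is compatible with function calling depends on the model and option in question, so checking the exact error text against the request body is the first step.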
-
### 📦 Deployment method
Docker
### 📌 Software version
2.13.0
### 💻 System environment
Other Linux
### 📌 System version
docker
### 🌐 Browser
Chrome
### 📌 Browser version
latest
### 🐛 Problem description
After updating, Google's Gemini no longer works; see the screenshot for details.
![image](htt…
-
Hi!
Thanks for making this great client for Gemini (and friends)! I am experiencing some issues using Lagrange on a device with a very small screen: a PinePhone Pro running Linux v6.4.1-1-danc…
-
I enabled `Enables optimization guide on device` and `Prompt API for Gemini Nano` flags on Chrome Canary.
Then running `await window.ai.canCreateTextSession()` returned `no`.
Device info: MacBook …
-
When testing locally I get this error:
`PHP Fatal error: Uncaught RuntimeException: Gemini API operation failed: operation=models/gemini-pro:generateContent, status_code=400, response={
"error…
-
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too oft…
-
### Issue
Web-based LLMs such as ChatGPT and Gemini provide larger quotas than their APIs, and are cheaper to use this way.
### Version and model info
Aider v0.40.6
Model: gemini/gemini-1.5-p…
-
I pulled the latest branch with `git pull -p`, ran `docker compose down && docker compose pull && docker compose up -d`, and manually added the new model information to the config.toml file, but when I try to use the gpt-4o-mini model in chatbox it reports "Not supported by this Model"; switching the model back to gpt-4o produces output normally.…
-
I am creating two different instances of the Gemma2B model, one for `llm` and the other for `mm_llm`. The world model works fine, but the action model gives me the message mentioned in the title. Here is…
-
### What happened?
Using Gemini 1.5 Pro via the router module:
{
"model_name": "gemini-1.5-pro-2m,
"litellm_params": {
"model": "gemini/gemini-1.5-pro",
"api_key": os.geten…