-
**Describe the solution you'd like**
Google has enabled a new model, which is still experimental: "Gemini 1.5 Pro Experimental". Is there a plan to incorporate this new model?
I get the following error …
-
I am doing an experiment on a few LLMs including GPT-4o, GPT-4, GPT-3.5, and Claude-3. I am getting this error repeatedly when I call the GPT models (all three of them).
fastapi_poe.client.BotErro…
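When `fastapi_poe` raises a bot error intermittently, a generic retry-with-exponential-backoff wrapper can often ride out transient failures. This is only a sketch: `call` and the retriable exception types are placeholders, not fastapi_poe's actual API (in the reporter's setup, `retriable` would be the fastapi_poe bot-error class):

```python
import time

def retry_with_backoff(call, retries=3, base_delay=1.0, retriable=(RuntimeError,)):
    """Call `call()` and retry on the given exception types with exponential backoff.

    `call` is any zero-argument callable (e.g. a wrapper around a model request);
    `retriable` is a tuple of exception classes to treat as transient.
    """
    for attempt in range(retries):
        try:
            return call()
        except retriable:
            if attempt == retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Hypothetical usage: a call that fails twice, then succeeds.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient bot error")
    return "ok"

result = retry_with_backoff(flaky_call, base_delay=0.01)
```

If the error persists across every retry, it is likely not transient and worth reporting with the full traceback.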
-
### Description of the bug:
The `response.text` quick accessor only works when the response contains a valid `Part`, but none was returned. Check the `candidate.safety_ratings` to see if the resp…
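The error means the model returned no valid `Part`, usually because the candidate was blocked, so the `response.text` shortcut raises. A defensive accessor along these lines avoids the exception; the attribute names (`candidates`, `content.parts`, `safety_ratings`) follow the google-generativeai response shape, but the objects used below are stand-ins for illustration:

```python
from types import SimpleNamespace

def safe_text(response, fallback=""):
    """Return the first candidate's text, or `fallback` (plus safety ratings)
    when no valid Part was returned -- the situation in this issue."""
    for candidate in getattr(response, "candidates", []):
        parts = getattr(candidate.content, "parts", None) or []
        if parts:
            return "".join(getattr(p, "text", "") for p in parts)
    # No parts anywhere: report safety ratings instead of raising like .text does.
    ratings = [getattr(c, "safety_ratings", None)
               for c in getattr(response, "candidates", [])]
    return fallback or f"blocked; safety_ratings={ratings}"

# Stand-in for a blocked response with no parts:
blocked = SimpleNamespace(candidates=[SimpleNamespace(
    content=SimpleNamespace(parts=[]),
    safety_ratings=["HARM_CATEGORY_X: HIGH"],  # made-up rating for the demo
)])
text = safe_text(blocked, fallback="(no text returned)")
```

Inspecting `candidate.safety_ratings` (and `finish_reason`, when present) then tells you why the part is missing.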
-
### What happened?
The issue happens when using litellm (litellm==1.43.18) with Gemini hosted by Google in dAnswer.
From my analysis, it is related to these snippets.
For LLM configuration, api_ba…
-
### Description of the bug:
All Google Gemini models that can generate text and support function calling work well when no tools are provided, but when tools are provided, they don't answer a simple quer…
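For reference, Gemini's function-calling requests carry tool schemas roughly in the shape below (the `get_weather` function is a made-up example; the `function_declarations` / JSON-schema layout matches the public Gemini API format). When a model stops answering once tools are attached, a first check is that each declaration validates against this shape:

```python
# A hypothetical tool declaration in the Gemini function-calling format.
tools = [{
    "function_declarations": [{
        "name": "get_weather",               # made-up example function
        "description": "Get the current weather for a city.",
        "parameters": {                      # OpenAPI-style JSON schema
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    }],
}]
# This list is what gets passed as the `tools` argument when building the
# model/request; a malformed schema here can silently degrade responses.
```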
-
### How are you running AnythingLLM?
Not listed
### What happened?
Using a Railway instance, with Gemini Pro 1.5 (1M context window).
I am trying to maximize the advantage of the context window by…
-
### Problem Description
![image](https://github.com/user-attachments/assets/9758cf3e-85aa-4252-9cde-11b86a487117)
CUSTOM_MODELS=-all,+gpt-3.5-turbo,+gpt-4,+gpt-4o,+gemini-1.5-pro,+claude-3-5-sonne…
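The `CUSTOM_MODELS` value follows the usual convention in these chat frontends: `-all` hides every built-in model, and each subsequent `+<name>` re-adds one by ID. A minimal sketch (generic model IDs; which IDs are accepted depends on the configured providers):

```shell
# -all disables every default model; each +<model> re-enables one by name.
CUSTOM_MODELS=-all,+gpt-4o,+gemini-1.5-pro
```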
-
### Issue
Why do I always see `LiteLLM:ERROR: litellm_logging.py:1265 - Model=deepseek-coder-v2 not found in completion cost map. Setting 'response_cost' to None`?
### Version and model info
_No respon…
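litellm computes `response_cost` from a built-in per-model price map, and models absent from it (such as a self-hosted `deepseek-coder-v2`) trigger exactly this log line. litellm exposes `register_model` for adding custom pricing; the sketch below builds only the cost-map dictionary (the prices and provider key are made-up placeholders), with the registration call left as a comment since it requires litellm installed:

```python
# Made-up placeholder prices; substitute your real per-token costs.
custom_cost_map = {
    "deepseek-coder-v2": {
        "input_cost_per_token": 0.0000002,
        "output_cost_per_token": 0.0000002,
        "litellm_provider": "openai",  # provider key here is an assumption
        "mode": "chat",
    },
}
# With litellm installed, registering the map silences the error:
# import litellm
# litellm.register_model(custom_cost_map)
```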
-
I am trying to use Koboldcpp's OpenAI-compatible API in the Custom Local (OpenAI format) section, but it is not working. I entered the model name, protocol, and port number. Please let me know if you need …
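When a frontend's "OpenAI format" integration fails, it can help to hit the endpoint directly and rule the frontend out. The payload below is the standard OpenAI chat-completions shape; the base URL, port, and model name are placeholders for whatever Koboldcpp is actually serving (only the request construction runs here; sending it is left as a comment):

```python
import json
from urllib import request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-format /v1/chat/completions request (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        url=f"{base_url}/v1/chat/completions",  # base_url is a placeholder
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:5001", "my-local-model", "Hello")
# To actually test connectivity against a running server:
# with request.urlopen(req, timeout=10) as resp:
#     print(resp.read().decode())
```

If this direct call works but the frontend does not, the problem is in how the frontend builds the URL (e.g. a missing or doubled `/v1` path segment), which is a common cause of these reports.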