paul-gauthier / aider

aider is AI pair programming in your terminal
https://aider.chat/
Apache License 2.0

Used deepseek-coder-v2 show Litellm error #874

Closed ASHCBY closed 2 months ago

ASHCBY commented 2 months ago

Issue

Why do I always get this error: LiteLLM:ERROR: litellm_logging.py:1265 - Model=deepseek-coder-v2 not found in completion cost map. Setting 'response_cost' to None?

Version and model info

No response
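For context: this warning comes from litellm's completion cost map. litellm has no pricing entry for a model named deepseek-coder-v2, so it cannot compute a response_cost and logs the error; it only affects cost reporting, and the completion itself still goes through. If the log noise matters, litellm lets callers register their own cost-map entry. A minimal sketch, assuming litellm.register_model accepts entries shaped like its built-in cost map; the numbers below are placeholders, not real DeepSeek pricing, and litellm_provider is set to ollama only to match the announcement lines later in this thread:

import litellm

# Hypothetical cost-map entry so litellm can price "deepseek-coder-v2" responses.
# The key matches the bare model name from the error; max_tokens and the
# per-token costs are placeholder values, not official figures.
litellm.register_model(
    {
        "deepseek-coder-v2": {
            "max_tokens": 8192,
            "input_cost_per_token": 0.0,
            "output_cost_per_token": 0.0,
            "litellm_provider": "ollama",
            "mode": "chat",
        }
    }
)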

paul-gauthier commented 2 months ago

Thanks for trying aider and filing this issue.

When reporting problems, it is very helpful if you can provide your aider version and which LLM model you are using.

Including the “announcement” lines that aider prints at startup is an easy way to share some of this helpful info.

Aider v0.37.1-dev
Models: gpt-4o with diff edit format, weak model gpt-3.5-turbo
Git repo: .git with 243 files
Repo-map: using 1024 tokens

ASHCBY commented 2 months ago

Aider v0.44.0
Model: ollama/deepseek-coder-v2 with whole edit format
Git repo: .git with 329 files
Repo-map: disabled
Use /help for help, run "aider --help" to see cmd line args

When the agent finishes it shows this error, then prints "Allow creation of new file game/main_game.py? y" and stops; I need to press Enter to continue.

AlexJ-StL commented 2 months ago

I'm actually having a similar issue in that the error also seems to come from LiteLLM, but the error itself is different. It occurred when attempting to use gemini-1.5-pro-latest. The issue started yesterday when I switched from Claude 3.5 Sonnet to Gemini. It allowed me to load the model, but when I attempted to use it, all hell broke loose. I use Claude, Gemini, several Groq models, and Fireworks on a daily basis, so I know it's not my API keys or their respective servers.

I am doing some troubleshooting now and will update if I figure it out. Since most issues with my projects come down to environments and/or (usually and) Python packages/dependencies, I will start there. If anyone has any suggestions, I am all ears.

I have included all the info requested below including the error.

""" Aider v0.43.5-dev Models: claude-3-5-sonnet-20240620 with diff edit format, weak model claude-3-haiku-20240307 Git repo: .git with 26 files Repo-map: using 1024 tokens """

I was notified of the newest version, so I upgraded v0.43.5-dev to v0.45.1. Closed and reopened PowerShell. Then spooled up aider and loaded the Gemini 1.5 Pro model.

"""

/model gemini/gemini-1.5-pro-latest**

Model gemini-1.5-pro-latest: Unknown which environment variables are required. Model gemini-1.5-pro-latest: Unknown context window size and costs, using sane defaults. Did you mean one of these? - gemini-1.5-pro-latest

Aider v0.45.1 Model: gemini/gemini-1.5-pro-latest with diff edit format Git repo: .git with 26 files Repo-map: using 1024 tokens """

Did I mean the model I entered? Um... Yes... Slightly confused, I decided to run a Groq model to see if it had problems too...

"""

aider --models groq/

litellm.APIConnectionError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gemini-1.5-pro-latest Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers Traceback (most recent call last): File "C:...\Python311\Lib\site-packages\litellm\main.py", line 815, in completion model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider( ^^^^^^^^^^^^^^^^^ File "C:...\Python311\Lib\site-packages\litellm\utils.py", line 4386, in get_llm_provider raise e File "C:...\Python311\Lib\site-packages\litellm\utils.py", line 4363, in get_llm_provider raise litellm.exceptions.BadRequestError( # type: ignore litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gemini-1.5-pro-latest Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers """

It then repeated a bunch of times until this new message...

""" During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "C:...\Python311\Lib\site-packages\aider\coders\base_coder.py", line 858, in send_new_user_message yield from self.send(messages, functions=self.functions) File "C:...\Python311\Lib\site-packages\aider\coders\base_coder.py", line 1110, in send hash_object, completion = send_with_retries( ^^^^^^^^^^^^^^^^^^ File "C:...\Python311\Lib\site-packages\aider\sendchat.py", line 53, in wrapper return decorated_func(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:...\Python311\Lib\site-packages\backoff_sync.py", line 105, in retry ret = target(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "C:...\Python311\Lib\site-packages\aider\sendchat.py", line 81, in send_with_retries res = litellm.completion(kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:...\Python311\Lib\site-packages\litellm\utils.py", line 1001, in wrapper raise e File "C:...\Python311\Lib\site-packages\litellm\utils.py", line 881, in wrapper result = original_function(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:...\Python311\Lib\site-packages\litellm\main.py", line 2605, in completion raise exception_type( ^^^^^^^^^^^^^^^ File "C:...\Python311\Lib\site-packages\litellm\utils.py", line 7650, in exception_type raise e File "C:...\Python311\Lib\site-packages\litellm\utils.py", line 7614, in exception_type raise APIConnectionError(litellm.exceptions.APIConnectionError: litellm.APIConnectionError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gemini-1.5-pro-latest Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers """

And here we go again...

""" Traceback (most recent call last): File "C:...\Python311\Lib\site-packages\litellm\main.py", line 815, in completion model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider( ^^^^^^^^^^^^^^^^^ File "C:...\Python311\Lib\site-packages\litellm\utils.py", line 4386, in get_llm_provider raise e File "C:...\Python311\Lib\site-packages\litellm\utils.py", line 4363, in get_llm_provider raise litellm.exceptions.BadRequestError( type: ignore litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gemini-1.5-pro-latest Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers """

It looked like it was re-entering the loop, so I decided to terminate the task.
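For what it's worth, the exception repeating in these traces is litellm's standard complaint when it is handed a model name without a provider prefix: it routes requests on the text before the slash, so a bare gemini-1.5-pro-latest gives it nothing to route on. A minimal sketch of that failure mode against litellm directly (not through aider), assuming a GEMINI_API_KEY is set in the environment for the prefixed call:

import litellm

messages = [{"role": "user", "content": "hello"}]

try:
    # Bare model name: litellm cannot tell which provider to call.
    litellm.completion(model="gemini-1.5-pro-latest", messages=messages)
except litellm.exceptions.BadRequestError as err:
    # "LLM Provider NOT provided. ... You passed model=gemini-1.5-pro-latest"
    print(err)

# With the provider prefix, litellm knows to route the request to Gemini.
litellm.completion(model="gemini/gemini-1.5-pro-latest", messages=messages)

Why the bare name shows up in the trace even though the chat was started with gemini/gemini-1.5-pro-latest is the puzzling part.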

AlexJ-StL commented 2 months ago

After some work I got Groq/Llama3-70b to work, which is normally not an issue. Then I got one prompt in and a partial response back before hitting "litellm.RateLimitError: RateLimitError: GroqException - Error code: 429 - {'error': {'message': 'Rate limit reached for model llama3-70b-8192 in organization org_01ht7j91w5fjx8yabp2xh9hq37 on tokens per minute (TPM): Limit 6000, Used 0, Requested 7259. Please try again..."
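This one is a different class of failure: the single request asked Groq for 7,259 tokens against a 6,000 tokens-per-minute cap, so it is over the limit on its own and waiting will not get the same request through; it has to shrink first (fewer files in the chat, a smaller repo map). A minimal sketch of handling the error when calling litellm directly, rather than relying on aider's own retry logic:

import time
import litellm

def complete_with_backoff(model, messages, max_tries=3):
    """Retry on rate limits; useless if a single request exceeds the TPM cap."""
    for attempt in range(max_tries):
        try:
            return litellm.completion(model=model, messages=messages)
        except litellm.exceptions.RateLimitError:
            if attempt == max_tries - 1:
                raise
            time.sleep((attempt + 1) * 20)  # wait out part of the per-minute window

# llama3-70b-8192 is the model named in the 429 error above.
response = complete_with_backoff(
    "groq/llama3-70b-8192",
    [{"role": "user", "content": "hello"}],
)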

paul-gauthier commented 2 months ago

This looks like a duplicate of #883, so I'm going to close it so discussion can happen there. Please let me know if you think it's actually a distinct issue.