joshpxyne / gpt-migrate

Easily migrate your codebase from one framework or language to another.
https://gpt-migrate.com
MIT License
6.81k stars 478 forks

ValueError: No valid completion model args passed in #42

Open BeMain opened 11 months ago

BeMain commented 11 months ago

I'm trying to migrate a project from Fortran95 (yes, I know it's old, that's why it needs to be migrated... :grin:) to Dart, but I get the following error:

Traceback (most recent call last):
  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/main.py", line 127, in <module>
    app()
  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/main.py", line 87, in main
    create_environment(globals)
  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/steps/setup.py", line 15, in create_environment
    llm_write_file(prompt,
  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/utils.py", line 52, in llm_write_file
    file_name,language,file_content = globals.ai.write_code(prompt)[0]
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/ai.py", line 23, in write_code
    response = completion(
               ^^^^^^^^^^^
  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/utils.py", line 98, in wrapper
    raise e
  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/utils.py", line 89, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/timeout.py", line 44, in wrapper
    result = future.result(timeout=local_timeout_duration)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/timeout.py", line 35, in async_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/main.py", line 248, in completion
    raise exception_type(model=model, original_exception=e)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/utils.py", line 273, in exception_type
    raise original_exception # base case - return the original exception
    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/main.py", line 242, in completion
    raise ValueError(f"No valid completion model args passed in - {args}")

It also prints all the parameters passed to the model; I can share those too if you want.

The command I run is:

python3 main.py --sourcedir "../my/source" --sourceentry md_3PG.f95 --sourcelang fortran95 --targetlang dart --targetdir "../my/target"

I couldn't find anyone else who had encountered the same problem, so I thought I'd open an issue.

I have tried a few different models, but that doesn't seem to make a difference; they all throw the same error, except gpt-3.5-turbo, which instead complains that the model's maximum context length has been exceeded.
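
For reference, a quick way to check whether a prompt will even fit a model's context window is to count its tokens up front with tiktoken. This is just a minimal sketch, not anything gpt-migrate does itself; the file name is simply my source entrypoint:

# pip install tiktoken
import tiktoken

# Use the tokenizer that matches the target model.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

# Count the tokens in the Fortran entrypoint being migrated.
with open("md_3PG.f95") as f:
    n_tokens = len(enc.encode(f.read()))

print(f"{n_tokens} tokens in the source file alone")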

mpmnath commented 11 months ago

Same issue here. I think you need to upgrade your API plan to use the GPT-4 model; the README says gpt-4-32k is preferred.

christian-ek commented 11 months ago

Getting the same problem. I only have access to gpt-4, not gpt-4-32k, and I see no way to upgrade.

mpmnath commented 11 months ago

I was able to at least get this running now, but there is a rate-limit error and the execution stops abruptly šŸ˜ž

BeMain commented 11 months ago

Oh, so that's what the error message means: that the model isn't available? That might be it, since using gpt-3.5-turbo or gpt-4 doesn't cause this error (they complain about token length instead). Strange. I tried this a few weeks back and think I got an error that the model wasn't available (when trying gpt-4-32k) back then too, only I remember the message being much clearer. Maybe things have changed recently.

I too only have access to gpt-4, so perhaps the only option is waiting for #2 then?
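
(In case it helps anyone else hitting this: you can list the models your API key actually has access to and check for gpt-4-32k directly. A small sketch against the pre-1.0 openai package that gpt-migrate pins, so treat the exact calls as an assumption about that version:)

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# List every model this key can call and check whether gpt-4-32k is among them.
available = {m["id"] for m in openai.Model.list()["data"]}
print("gpt-4-32k available:", "gpt-4-32k" in available)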

BeMain commented 11 months ago

> I was able to at least get this running now, but there is a rate-limit error and the execution stops abruptly šŸ˜ž

What did you do differently to get it to work?

mpmnath commented 11 months ago

> I was able to at least get this running now, but there is a rate-limit error and the execution stops abruptly šŸ˜ž

> What did you do differently to get it to work?

I used the gpt-4 model. Since gpt-4 has a rate limit unlike gpt-4-32k, I wrapped the API calls in the utils.py file in try/except blocks to retry the request.

import time

import openai
import typer
from yaspin import yaspin


def llm_run(prompt, waiting_message, success_message, globals):
    output = ""
    with yaspin(text=waiting_message, spinner="dots") as spinner:
        try:
            output = globals.ai.run(prompt)
        except openai.error.RateLimitError as e:
            # Wait the interval suggested by the API (or 30s if none is given),
            # then retry the request once.
            retry_time = e.retry_after if hasattr(e, 'retry_after') else 30
            time.sleep(retry_time)
            output = globals.ai.run(prompt)  # Retry the API request
        spinner.ok("āœ… ")

    if success_message:
        success_text = typer.style(success_message, fg=typer.colors.GREEN)
        typer.echo(success_text)

    return output
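
One caveat with the snippet above: it only retries once, so a second rate-limit error still aborts the run. A more forgiving variant retries a few times with exponential backoff (a sketch only; run_with_retries, max_attempts, and base_delay are my own names, not anything in the repo, and it reuses the imports above):

def run_with_retries(ai, prompt, max_attempts=5, base_delay=30):
    # Retry on rate limits, doubling the wait each time: 30s, 60s, 120s, ...
    for attempt in range(max_attempts):
        try:
            return ai.run(prompt)
        except openai.error.RateLimitError as e:
            if attempt == max_attempts - 1:
                raise  # out of attempts, give up
            time.sleep(getattr(e, "retry_after", None) or base_delay * 2 ** attempt)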

krrishdholakia commented 11 months ago

Hey @BeMain @mpmnath @christian-ek

I'm the co-maintainer of litellm. We should have returned a better error message here. That error means that no valid model was found.

Can you show me the code to repro this problem? What was the model name that got passed in?

BeMain commented 11 months ago

The default model for gpt-migrate is gpt-4-32k. That was the model throwing the error, because I don't have access to it. If that's what you mean?

I'm not sure about the underlying code; I haven't checked the source myself. But this repo, in combination with the stack trace I provided, should be enough to figure out exactly what was throwing the error... I think? Let me know if there's anything more you need.

krrishdholakia commented 11 months ago

Sounds good @BeMain - will follow up

stridera commented 7 months ago

Was there any update to this? gpt-4 doesn't work at all, and gpt-4-turbo-preview gets further, but the token settings are incorrect.

With gpt-4:
openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.

With gpt-4-turbo-preview:
ValueError: No valid completion model args passed in - {'model': 'gpt-4-turbo-preview', 'messages': [{'role': 'user', 'content': '...'}], 'functions': [], 'function_call': '', 'temperature': 0.0, 'top_p': 1, 'n': 1, 'stream': False, 'stop': None, 'max_tokens': 10000, 'presence_penalty': 0, 'frequency_penalty': 0, 'logit_bias': {}, 'user': '', 'force_timeout': 60, 'azure': False, 'logger_fn': None, 'verbose': False, 'optional_params': {'temperature': 0.0, 'max_tokens': 10000}}

I'll start updating args based on the openai API.
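
For anyone curious what "updating args" means here: the dump above contains litellm-specific keys (force_timeout, azure, logger_fn, verbose, optional_params) that the OpenAI chat completions endpoint doesn't accept, plus empty placeholders like functions=[] and function_call=''. A rough sketch of the kind of filtering I mean (the whitelist below is an assumption, not the complete OpenAI schema):

# Keep only kwargs the OpenAI chat completions API accepts,
# and drop empty placeholder values (assumed whitelist, not exhaustive).
OPENAI_CHAT_PARAMS = {
    "model", "messages", "temperature", "top_p", "n", "stream", "stop",
    "max_tokens", "presence_penalty", "frequency_penalty", "logit_bias", "user",
}

def clean_completion_args(raw_args: dict) -> dict:
    return {
        key: value for key, value in raw_args.items()
        if key in OPENAI_CHAT_PARAMS and value not in ([], "", None)
    }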

st9824-att commented 5 months ago

I just downloaded this last week and started setting it up. I'm using gpt-4-32k and am currently stuck on this error. How do I fix it?

krrishdholakia commented 5 months ago

Hey - we should have this fixed on our end (i.e. raising the right error here). I also no longer see the line ValueError: No valid completion model args passed in anywhere in litellm's codebase - https://github.com/search?q=repo%3ABerriAI%2Flitellm+%22ValueError%3A+No+valid+completion+model+args+passed+in%22&type=code

st9824-att commented 5 months ago

Interesting. I wonder if this is coming from one of the Python libraries. Since I just downloaded gpt-migrate last week, I should have the latest code.

  270 │         raise InvalidRequestError(f"CohereException - {error_str}", f"{model}")
  271 │     elif "CohereConnectionError" in exception_type: # cohere seems to fire these err
  272 │         raise RateLimitError(f"CohereException - {original_exception.message}")
> 273 │     raise original_exception # base case - return the original exception
  274 │ else:
  275 │     raise original_exception
  276 │

locals:
  error_str          = "No valid completion model args passed in - {'model': 'openrouter/openai/gpt-4-32"+3349
  exception_type     = 'ValueError'
  model              = 'openrouter/openai/gpt-4-32k'
  original_exception = ValueError('No valid completion model args passed in - {\'model\':
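
A quick way to confirm which litellm version is actually installed (standard library only, Python 3.8+):

import importlib.metadata

# Compare the installed litellm version against the one pinned in requirements.txt.
print(importlib.metadata.version("litellm"))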

krrishdholakia commented 5 months ago

That looks like litellm, but I don't think it's our latest version.

Based on the requirements.txt, I can see it's pinned to a pretty old version - https://github.com/joshpxyne/gpt-migrate/blob/508c3489f43fb8045e846560322b98acac89c909/gpt_migrate/requirements.txt#L6

Can you try this snippet on our latest version (v1.34.35) and let me know if you still see it?

# pip install litellm==1.34.35

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-4-32k", messages=messages)

print(response)

hannesaasamets commented 1 month ago

Thank you @krrishdholakia! Upgrading litellm did the trick and it got past the "ValueError: No valid completion model args passed in" error.

I used your snippet with gpt-4o now and it worked (it didn't work for gpt-4-32k).

krrishdholakia commented 1 month ago

Hey @hannesaasamets, what was the issue you saw for gpt-4-32k? They follow the same code path, so I'm curious what happened.