lm-sys / RouteLLM

A framework for serving and evaluating LLM routers - save LLM costs without compromising quality!

litellm.drop_params error when running the OpenAI server #63

Open masaruduy opened 1 week ago

masaruduy commented 1 week ago

I'm running the server normally with `python -m routellm.openai_server --routers mf --weak-model ollama_chat/codeqwen` and am getting this whenever I attempt a prompt:

File "C:\Users\nate\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\litellm\utils.py", line 3586, in get_optional_params _check_valid_arg(supported_params=supported_params) File "C:\Users\nate\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\litellm\utils.py", line 3060, in _check_valid_arg raise UnsupportedParamsError( litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError: ollama_chat does not support parameters: {'presence_penalty': 0.0}, for model=codeqwen. To drop these, set litellm.drop_params=True or for proxy:

litellm_settings: drop_params: true

I've tried modifying the YAML to no avail. Please help!
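To spell out what the error message is asking for: litellm exposes a global `drop_params` flag that makes it drop request parameters the target provider (here `ollama_chat`) doesn't support, such as `presence_penalty`. A rough sketch of a wrapper script that sets it before launching the server, assuming the RouteLLM server reads its arguments from `sys.argv` and calls litellm in the same process (both assumptions on my part):

```python
# run_router.py -- hypothetical wrapper, not part of RouteLLM
import sys
import runpy

import litellm

# Global litellm flag: silently drop params unsupported by the provider,
# e.g. presence_penalty for ollama_chat models.
litellm.drop_params = True

# Re-create the original command line and run the server module in-process
# so the flag above stays in effect.
sys.argv = [
    "routellm.openai_server",
    "--routers", "mf",
    "--weak-model", "ollama_chat/codeqwen",
]
runpy.run_module("routellm.openai_server", run_name="__main__")
```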

masaruduy commented 1 week ago

Oh, and I'm trying to use it with continue.dev:

```json
{
  "model": "router-mf-0.11593",
  "title": "routellm",
  "completionOptions": {},
  "apiBase": "http://192.168.0.215:6060/v1",
  "apiKey": "no_api_key",
  "provider": "openai"
},
```
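For what it's worth, a quick way to check whether the problem is the router server itself rather than Continue would be to hit the endpoint directly with the official `openai` client. The base URL and model name below are just copied from my Continue config, and I'm assuming the server mirrors the standard `/v1/chat/completions` route and ignores the API key:

```python
# Minimal sanity check against the RouteLLM OpenAI-compatible server.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.0.215:6060/v1",  # apiBase from the Continue config
    api_key="no_api_key",                     # placeholder; presumably not checked
)

resp = client.chat.completions.create(
    model="router-mf-0.11593",  # router name + threshold, as in the Continue config
    messages=[{"role": "user", "content": "Write a hello-world in Python."}],
)
print(resp.choices[0].message.content)
```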