jupyterlab / jupyter-ai

A generative AI extension for JupyterLab
https://jupyter-ai.readthedocs.io/
BSD 3-Clause "New" or "Revised" License

jupyter_ai_magics.providers.ChatOpenAIProvider() got multiple values for keyword argument 'max_tokens' #624

Closed. EduardDurech closed this issue 8 months ago.

EduardDurech commented 8 months ago

There is an error when setting max_tokens for openai-chat in jupyter_jupyter_ai_config.json.
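For context, the relevant part of jupyter_jupyter_ai_config.json presumably looks something like the following. This is a reconstruction from the "Configured model parameters" log line quoted below, since the exact file contents were not posted:

{
    "AiExtension": {
        "model_parameters": {
            "openai-chat:gpt-4-1106-preview": {
                "max_tokens": 4095
            }
        }
    }
}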

Error

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/jupyter_ai/chat_handlers/base.py", line 113, in on_message
    await self.process_message(message)
  File "/usr/local/lib/python3.10/dist-packages/jupyter_ai/chat_handlers/default.py", line 53, in process_message
    self.get_llm_chain()
  File "/usr/local/lib/python3.10/dist-packages/jupyter_ai/chat_handlers/base.py", line 196, in get_llm_chain
    self.create_llm_chain(lm_provider, lm_provider_params)
  File "/usr/local/lib/python3.10/dist-packages/jupyter_ai/chat_handlers/default.py", line 27, in create_llm_chain
    llm = provider(**provider_params, **model_parameters)
TypeError: jupyter_ai_magics.providers.ChatOpenAIProvider() got multiple values for keyword argument 'max_tokens'

https://github.com/jupyterlab/jupyter-ai/blob/8ea0ff34335ab530d8c22a80602ce5668d5b85e6/packages/jupyter-ai/jupyter_ai/chat_handlers/base.py#L204-L209
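The traceback itself is ordinary Python keyword-splatting behavior: a call of the form provider(**provider_params, **model_parameters) raises TypeError whenever the same key appears in both dicts. A minimal standalone sketch, with illustrative names rather than jupyter-ai's actual class:

def provider(**kwargs):
    # Stand-in for ChatOpenAIProvider; any callable taking **kwargs behaves the same.
    return kwargs

provider_params = {"model_id": "gpt-4-1106-preview", "max_tokens": 4095}
model_parameters = {"max_tokens": 4095}

# Raises: TypeError: provider() got multiple values for keyword argument 'max_tokens'
llm = provider(**provider_params, **model_parameters)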

Steps:

Parameters are successfully loaded:

[I FooDate AiExtension] Configured model parameters: {'openai-chat:gpt-4-1106-preview': {'max_tokens': 4095}}

Searching the codebase, I cannot find max_tokens being set anywhere else, and https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.openai.ChatOpenAI.html does not set it either.

welcome[bot] commented 8 months ago

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! :hugs:
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively. You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! :wave:
Welcome to the Jupyter community! :tada:

JasonWeill commented 8 months ago

@EduardDurech Thanks for opening this! I was able to reproduce this by passing a similar override on the command line using the tip of the main branch and JupyterLab 4.1.0.

jupyter lab --AiExtension.model_parameters openai-chat:gpt-4-1106-preview='{"max_tokens": 4095}'

DzmitrySudnik commented 8 months ago

I'm getting the same error with Bedrock-chat models. In fact, even passing a nonsense key such as --AiExtension.model_parameters='bedrock-chat:anthropic.claude-v2={"anything":200}' produces the same error, except it complains about got multiple values for keyword argument 'anything'.

JasonWeill commented 8 months ago

It looks like the parameter is being ingested into both provider_params and model_parameters. I logged the value of each variable immediately before instantiating llm in jupyter_ai/chat_handlers/default.py, and I found the same custom model parameter in both objects:

[I 2024-02-07 15:38:23.871 AiExtension] provider_params: {'model_id': 'gpt-4-1106-preview', 'max_tokens': 4095, 'openai_api_key': 'secret'}
[I 2024-02-07 15:38:23.871 AiExtension] model_parameters: {'max_tokens': 4095}

This is true whether the parameter name is max_tokens, the literal anything from the comment above, or any other name.
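If that double ingestion is the root cause, one possible workaround (a sketch only, not a patch from this repo) would be to merge the two dicts before calling the provider, so that a duplicated key resolves instead of colliding:

# Hypothetical change at the call site in jupyter_ai/chat_handlers/default.py;
# variable names follow the traceback above.
merged_params = {**provider_params, **model_parameters}  # later dict wins on duplicate keys
llm = provider(**merged_params)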

EduardDurech commented 8 months ago

Not sure if it's relevant, but I noticed that whatever I passed as the argument seemed to be autosaved to ~/.local/share/jupyter/jupyter_ai/config.json under a field called "Fields" or something similar (I cannot access the server to check right now, sorry about that).