danny-avila / LibreChat

Enhanced ChatGPT Clone: Features Anthropic, AWS, OpenAI, Assistants API, Azure, Groq, o1, GPT-4o, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message search, langchain, DALL-E-3, ChatGPT Plugins, OpenAI Functions, Secure Multi-User System, Presets, completely open-source for self-hosting. Actively in public development.
https://librechat.ai/
MIT License

[Bug]: Google <> Anthropic Params Conflict with Mid-convo Switch #2778

Closed TheValkyrja closed 2 months ago

TheValkyrja commented 4 months ago

What happened?

When using the mid-conversation endpoint-switching feature, the parameters do not dynamically reset to the new endpoint's default values after a response has been generated with the previous model. For example, max output tokens remains 8192 when switching from gemini-1.5-pro to claude-3-opus (and likewise for other parameters such as top P, top K, etc.), and this causes an error.

Steps to Reproduce

1. Use Gemini-1.5-flash-latest with its default parameter values (temperature 0.2, top P 0.8, top K 40, max output tokens 8192) to generate a response.
2. Switch the conversation model to Anthropic Claude-3-Opus without setting any parameter values.
3. The model returns an error message saying the value is unsupported.
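The failure mode in the steps above can be sketched as follows. This is a hypothetical illustration, not LibreChat's actual code: the interface, function names, and the 4096-token output cap for claude-3-opus are assumptions for the sketch.

```typescript
// Illustration: Gemini's default params carried over, unchanged, to an
// Anthropic request. All names here are assumed for the sketch.

interface GenerationParams {
  temperature: number;
  topP: number;
  topK: number;
  maxOutputTokens: number;
}

// Defaults the conversation picked up from gemini-1.5-flash-latest
const geminiDefaults: GenerationParams = {
  temperature: 0.2,
  topP: 0.8,
  topK: 40,
  maxOutputTokens: 8192,
};

// Assumed output-token cap for claude-3-opus in this sketch
const ANTHROPIC_MAX_OUTPUT_TOKENS = 4096;

// Collect the values the new endpoint would reject instead of sending them
function validateForAnthropic(params: GenerationParams): string[] {
  const errors: string[] = [];
  if (params.maxOutputTokens > ANTHROPIC_MAX_OUTPUT_TOKENS) {
    errors.push(
      `maxOutputTokens ${params.maxOutputTokens} exceeds limit ${ANTHROPIC_MAX_OUTPUT_TOKENS}`,
    );
  }
  return errors;
}

console.log(validateForAnthropic(geminiDefaults));
```

Run against the carried-over Gemini defaults, the check flags the 8192-token value, which matches the "unsupported value" error seen in step 3.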

What browsers are you seeing the problem on?

Chrome

Relevant log output

No response

Screenshots

(Screenshots) As you can see, these are the default values for Gemini, but they are wrongly applied to Claude.


danny-avila commented 4 months ago

Thanks. When the params are "shared," the previous values are allowed to carry over mid-conversation, but in this case they are only shared in name, and a distinction needs to be made for a more seamless experience.
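One way to make that distinction is to reset carried-over values to the new endpoint's defaults whenever the endpoint changes. This is a minimal sketch of that idea; the type names, the default values for each endpoint, and the `switchEndpoint` helper are all assumptions, not LibreChat's actual implementation.

```typescript
// Sketch: on an endpoint change, discard the old endpoint's parameter
// values (they are only "shared in name") and fall back to the new
// endpoint's defaults. All names and default values are assumed.

type Endpoint = 'google' | 'anthropic';

interface GenerationParams {
  temperature: number;
  topP: number;
  topK: number;
  maxOutputTokens: number;
}

// Assumed per-endpoint defaults for the sketch
const endpointDefaults: Record<Endpoint, GenerationParams> = {
  google:    { temperature: 0.2, topP: 0.8, topK: 40, maxOutputTokens: 8192 },
  anthropic: { temperature: 1.0, topP: 0.7, topK: 5,  maxOutputTokens: 4096 },
};

function switchEndpoint(
  current: GenerationParams,
  from: Endpoint,
  to: Endpoint,
): GenerationParams {
  // Same endpoint: keep whatever the user has set
  if (from === to) return current;
  // Different endpoint: reset to its defaults rather than reusing values
  return { ...endpointDefaults[to] };
}
```

With this approach, switching from Gemini to Claude mid-conversation would send Claude its own defaults (e.g. max output tokens 4096) instead of the Gemini values that trigger the error.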

dtgagnon commented 2 months ago

This also appears to happen when a conversation started with an OpenAI model (only tested gpt-4o) is switched mid-convo to an Anthropic model.