✅ Checks
[x] My pull request adheres to the code style of this project
[x] My code requires changes to the documentation
[x] I have updated the documentation as required
[x] All the tests have passed
[x] Branch name follows type/descript (e.g. feature/add-llm-agents)
📑 Description
#32 introduces a problem: `ChatLiteLLM` requires a non-optional `max_tokens` argument, which is hard to calculate for different models, especially models outside OpenAI; this appears to be a deliberate design decision in LangChain. In this PR, I added a `PatchedLiteLLM` class that inherits from `LiteLLM` but makes `max_tokens` optional.

Note: This has not been extensively tested against all models supported by LiteLLM. Errors may appear when you are using new models. Please open an issue when you encounter one.
ℹ Additional Information