Closed Zabrell closed 1 year ago
Thank you for bringing this issue to our attention! It seems there is an error in the `get_num_tokens_from_messages()` function when using the `gpt-3.5-turbo-0613` model in LangChain.
Based on your description, you have identified a potential solution by modifying the code in `openai.py`. We appreciate your effort in proposing a fix!
To contribute your solution to LangChain, you can create a pull request (PR) with your code changes. The maintainers will review your proposed fix and decide if it should be merged into the project.
Please follow the contribution guidelines of LangChain and submit your PR with the proposed code changes. Your contribution will help improve the framework for everyone using it.
Thank you for your willingness to contribute to LangChain! Let us know if you have any further questions or need any assistance with the contribution process.
Sorry, my version of the library was outdated; this error no longer occurs in the newer release.
System Info
Windows (the server runs on Linux), Python, Poetry
Who can help?
@hwchase17 @agola11
Information
Related Components
Reproduction
LangChain has problems handling model snapshots (dated model versions). For example, when working with `gpt-3.5-turbo-0613` it is not able to count the number of tokens, because the model check in `get_num_tokens_from_messages()` (`langchain\chat_models\openai.py`) is hard-coded to exact model names. Please correct the following:
Replace:
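The original replacement snippet was not preserved in this issue. As a hypothetical sketch only (not the actual LangChain source), the kind of fix described would normalize dated snapshot names to their base model before the hard-coded equality checks, so that `gpt-3.5-turbo-0613` is treated like `gpt-3.5-turbo`:

```python
# Hypothetical sketch of the proposed fix, assuming the bug is that
# get_num_tokens_from_messages() compares the model string against
# exact names and so fails for dated snapshots like "gpt-3.5-turbo-0613".

def normalize_model_name(model: str) -> str:
    """Map a dated snapshot (e.g. 'gpt-3.5-turbo-0613') to its base model."""
    if model.startswith("gpt-3.5-turbo"):
        return "gpt-3.5-turbo"
    if model.startswith("gpt-4"):
        return "gpt-4"
    return model


def tokens_per_message(model: str) -> int:
    """Per-message token overhead, keyed on the normalized base name."""
    base = normalize_model_name(model)
    if base == "gpt-3.5-turbo":
        # Illustrative value; see OpenAI's token-counting guide for the
        # exact overhead of each snapshot.
        return 4
    if base == "gpt-4":
        return 3
    raise NotImplementedError(f"token counting not implemented for {model!r}")
```

With a prefix check like this, new dated snapshots fall through to the matching base-model branch instead of raising an error.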
Expected behavior
Sorry, the project is closed-source, so I cannot share the code, but the problem arises when using dated model snapshots and calculating the number of tokens.