🤖
Hello,
Thank you for reaching out and providing detailed information about the issue you're facing.
The error message "Failed to calculate number of tokens, falling back to approximate count" is thrown when the encodingForModel(modelName)
function fails to calculate the number of tokens in the prompt. This could be due to a variety of reasons such as an unsupported model name or an issue with the js-tiktoken
library.
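To illustrate, here is a minimal sketch of how that token counting typically looks with `js-tiktoken`; a model name the library does not recognize makes `encodingForModel` throw, which is what triggers the fallback (the fallback heuristic shown is an assumption, not the exact langchainjs code):

```ts
import { encodingForModel, type TiktokenModel } from "js-tiktoken";

function countTokens(text: string, modelName: string): number {
  try {
    // Throws if js-tiktoken has no encoding registered for this model name.
    const encoding = encodingForModel(modelName as TiktokenModel);
    return encoding.encode(text).length;
  } catch {
    console.warn("Failed to calculate number of tokens, falling back to approximate count");
    // Rough heuristic: roughly 4 characters per token for English text.
    return Math.ceil(text.length / 4);
  }
}
```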
The `maxTokens: -1` setting you're using is not directly related to this error. The `maxTokens` value is calculated by subtracting the number of tokens in the prompt from the model context size, which is determined by the `getModelContextSize(modelName)` function. If the `maxTokens` value is negative, it means that the prompt contains more tokens than the model can handle. However, this is not the cause of the error you're seeing; the error is specifically related to the calculation of the number of tokens in the prompt.
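As a rough sketch of that subtraction (the helper names mirror the ones mentioned above, but the signatures and context sizes here are assumptions, not the literal langchainjs source):

```ts
// Assumed context windows; langchainjs maintains its own table of these.
const MODEL_CONTEXT_SIZES: Record<string, number> = {
  "gpt-3.5-turbo": 4096,
  "gpt-3.5-turbo-instruct": 4096,
};

function getModelContextSize(modelName: string): number {
  return MODEL_CONTEXT_SIZES[modelName] ?? 4096;
}

// maxTokens = context size minus the tokens already consumed by the prompt.
// A negative result means the prompt alone exceeds the model's context window.
function calculateMaxTokens(promptTokens: number, modelName: string): number {
  return getModelContextSize(modelName) - promptTokens;
}
```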
In the context shared, the `maxTokens` value is set to `-1` in the `invocationParams` function of the `OpenAI` class. If `maxTokens` is `-1` and there are multiple inputs, an error is thrown. If there is only one input, the `calculateMaxTokens` function is called to calculate the maximum number of tokens for the prompt. That function uses `encodingForModel(modelName)` to count the tokens in the prompt; if that call fails, the error you're seeing is thrown.
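Putting it together with the sketches above, the branching looks roughly like this (a paraphrase of the described behavior, not the actual `invocationParams` implementation):

```ts
// Paraphrased control flow; relies on countTokens and calculateMaxTokens
// from the sketches above.
function resolveMaxTokens(
  maxTokens: number,
  prompts: string[],
  modelName: string
): number {
  if (maxTokens !== -1) return maxTokens;
  if (prompts.length !== 1) {
    throw new Error("Cannot calculate maxTokens for multiple prompts");
  }
  // The token count below is where encodingForModel can fail
  // and produce the warning from this report.
  const promptTokens = countTokens(prompts[0], modelName);
  return calculateMaxTokens(promptTokens, modelName);
}
```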
I found a similar issue in the langchainjs repository: "Failed to calculate number of tokens with tiktoken". The suggested solution is to upgrade LangchainJS to version 0.0.76 or newer, which replaces the WASM `tiktoken` package with a pure-JS version.
Please try upgrading your LangchainJS version and let me know if the issue persists. If it does, we can further investigate the problem.
Best, Dosu
@dqbd can you have a look at this one?
Should be fixed!
Hello team!
Thank you for the project, this is super helpful.
I was trying out the latest `gpt-3.5-turbo-instruct` model and was facing this error. This is the code I used; it throws the error and the request does not go through.
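(A sketch along the lines described here, assuming the standard `OpenAI` wrapper from langchainjs; the exact original snippet may have differed.)

```ts
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({
  modelName: "gpt-3.5-turbo-instruct",
  maxTokens: -1, // ask langchain to derive max tokens from the context size
});

// Fails with: "Failed to calculate number of tokens, falling back to approximate count"
const res = await model.call("Tell me a joke.");
console.log(res);
```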
Do note that this only happens when we use `maxTokens: -1`. I believe this might be an issue with `js-tiktoken`.