Closed — gitLinan closed this issue 1 year ago
Can you provide me with the examples that differed in token count? I would assume that Azure OpenAI uses the same model and encodings under the hood, but I have not used Azure myself
I'm sorry, this was my mistake. Azure's wrapper requires the prompt to include the JSON scaffolding for the system role, so the token count computed locally on the message text alone differs from the actual consumption reported by the API.
I am using Azure's OpenAI API service with the GPT-3.5 model, and there is a discrepancy between the token count I compute locally for the question and answer and the usage returned by the API service.