pkoukk / tiktoken-go

go version of tiktoken
MIT License

Calculated token count is inaccurate #6

Closed chenxi2015 closed 1 year ago

chenxi2015 commented 1 year ago

My prompt:

golang, Please use Markdown syntax to reply

model: gpt-3.5-turbo encoding: cl100k_base

The computed count is 9 tokens, but the actual count is 17. The API returns this error:

This model's maximum context length is 4097 tokens. However, you requested 4104 tokens (17 in the messages, 4087 in the completion). Please reduce the length of the messages or completion.

My code:

func (g *GPT) getTikTokenByEncoding(prompt string) (int, error) {
    encoding := g.getAvailableEncodingModel(Model)
    g.App.LogInfo("encoding: ", encoding)
    tkm, err := tiktoken.GetEncoding(encoding)
    if err != nil {
        return 0, err
    }
    token := tkm.Encode(prompt, nil, nil)
    return len(token), nil
}

How can I fix this?

pkoukk commented 1 year ago

See section 6, "Counting tokens for chat API calls", in the official token-counting cookbook: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb. The role in each message also consumes tokens, and every request carries a fixed overhead of 3 extra tokens. Try recalculating with its method.
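
For reference, the cookbook's arithmetic reproduces the API's 17 tokens exactly here. Below is a minimal sketch of that calculation, assuming (as measured above) that the content encodes to 9 tokens and the role "user" to 1; the per-message overhead of 4 and the reply-priming 3 are constants from the cookbook for gpt-3.5-turbo-0301, not values exposed by this library:

package main

import (
    "fmt"

    "github.com/pkoukk/tiktoken-go"
)

func main() {
    tkm, err := tiktoken.GetEncoding("cl100k_base")
    if err != nil {
        panic(err)
    }
    // Both the content and the role of a message are encoded and counted.
    content := len(tkm.Encode("golang, Please use Markdown syntax to reply", nil, nil)) // 9 tokens
    role := len(tkm.Encode("user", nil, nil))                                          // 1 token
    // 4 per-message overhead + content + role + 3 reply-priming tokens
    fmt.Println(4 + content + role + 3) // 17
}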

chenxi2015 commented 1 year ago

> See section 6, "Counting tokens for chat API calls", in the official token-counting cookbook: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb. The role in each message also consumes tokens, and every request carries a fixed overhead of 3 extra tokens. Try recalculating with its method.

OK, I'll give it a try.

pkoukk commented 1 year ago

The cookbook also notes: "Consider the counts from the function below an estimate, not a timeless guarantee." In other words, it is an estimate, not guaranteed to be exact. That said, in my experience so far the discrepancy has been small; personally I'd advise not budgeting the token count too tightly in your requests, and you should be fine.

chenxi2015 commented 1 year ago

> The cookbook also notes: "Consider the counts from the function below an estimate, not a timeless guarantee." In other words, it is an estimate, not guaranteed to be exact. That said, in my experience so far the discrepancy has been small; personally I'd advise not budgeting the token count too tightly in your requests, and you should be fine.

OK.

nasa1024 commented 1 year ago

If you're using github.com/sashabaranov/go-openai, you can use the following approach:

package pixar // change package name!!!

import (
    "fmt"

    "github.com/pkoukk/tiktoken-go"
    "github.com/sashabaranov/go-openai"
)

// NumTokensFromMessages counts the prompt tokens used by a slice of chat messages, following the OpenAI cookbook rules.
func NumTokensFromMessages(messages []openai.ChatCompletionMessage, model string) (num_tokens int) {
    tkm, err := tiktoken.EncodingForModel(model)
    if err != nil {
        err = fmt.Errorf("EncodingForModel: %v", err)
        fmt.Println(err)
        return
    }

    var tokens_per_message int
    var tokens_per_name int
    if model == "gpt-3.5-turbo-0301" || model == "gpt-3.5-turbo" {
        tokens_per_message = 4
        tokens_per_name = -1
    } else if model == "gpt-4-0314" || model == "gpt-4" {
        tokens_per_message = 3
        tokens_per_name = 1
    } else {
        fmt.Println("Warning: model not found. Using cl100k_base encoding.")
        tokens_per_message = 3
        tokens_per_name = 1
    }

    for _, message := range messages {
        num_tokens += tokens_per_message
        num_tokens += len(tkm.Encode(message.Content, nil, nil))
        num_tokens += len(tkm.Encode(message.Role, nil, nil))
        if message.Name != "" {
            num_tokens += tokens_per_name
        }
    }
    num_tokens += 3
    return num_tokens
}

// GetTokenByModel returns the number of tokens in text under the encoding for the given model.
func GetTokenByModel(text string, model string) (num_tokens int) {
    tkm, err := tiktoken.EncodingForModel(model)
    if err != nil {
        err = fmt.Errorf("EncodingForModel: %v", err)
        fmt.Println(err)
        return
    }

    token := tkm.Encode(text, nil, nil)

    return len(token)
}
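
A note on the constants, per the cookbook's comments: each message is wrapped in fixed start/end markers plus its role, which is where tokens_per_message comes from; when a name is present it replaces the role (hence tokens_per_name = -1 for gpt-3.5-turbo); and every reply is primed with an assistant prefix, which is the final num_tokens += 3.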

Call it like this:

usage.PromptTokens = pixar.NumTokensFromMessages(req.Messages, openai.GPT4)

nasa1024 commented 1 year ago

@pkoukk Consider whether this should be added to the readme.md.

pkoukk commented 1 year ago

> @pkoukk Consider whether this should be added to the readme.md.

Added.