When calling this library many times (for instance, when using it to split text into parts), it appears the internal encodings aren't reused, and there is no way to make them be. As a result, every call is quite slow:
```js
import { GPTTokens } from 'gpt-tokens'
// import { GPTTokens } from '../src/libs/gptTokens.js'

for (let i = 0; i < 1000; i++) {
    console.time('GPTTokens')

    const usageInfo = new GPTTokens({
        plus: false,
        model: 'gpt-3.5-turbo-0613',
        messages: [
            {
                role: 'user',
                content: 'Hello world',
            },
        ],
    })

    // Access the getters so the token counts are actually computed
    usageInfo.usedTokens
    usageInfo.promptUsedTokens
    usageInfo.completionUsedTokens
    usageInfo.usedUSD

    console.timeEnd('GPTTokens')
}
```
Per-iteration timings from the loop above:
When the encodings are cached in the module:
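For concreteness, this is roughly the kind of module-level cache I have in mind. It is only a sketch: it assumes the tokenizer is js-tiktoken's `encodingForModel` (gpt-tokens' actual internals may differ), and `getCachedEncoding` is a hypothetical helper name, not part of the library's API.

```js
// Hypothetical sketch, not the gpt-tokens API: keep one encoding per model
// at module level so repeated calls don't rebuild the tokenizer every time.
// Assumes js-tiktoken; gpt-tokens may use a different tokenizer internally.
import { encodingForModel } from 'js-tiktoken'

const encodingCache = new Map()

function getCachedEncoding(model) {
    let encoding = encodingCache.get(model)
    if (!encoding) {
        // Expensive construction happens only on the first call per model
        encoding = encodingForModel(model)
        encodingCache.set(model, encoding)
    }
    return encoding
}

// Example: later calls reuse the cached instance
const tokenCount = getCachedEncoding('gpt-3.5-turbo').encode('Hello world').length
console.log(tokenCount)
```

With something like this inside the library (or exposed as an option), the benchmark loop above would only pay the encoding-construction cost on the first iteration.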