Closed by finom 11 months ago
Yes, this library will support it in v1.2.0. In the meantime, I'm developing and testing it.
@Cainier while you develop it, can you give me a clue how I can do it myself?
For anyone still wondering how to do this: here is a snippet of code you can run once when the server starts to get the real token usage of your functions.
void (async () => {
    const result = await createCompletion({
        model: MODEL,
        messages: [{ content: '', role: 'user' }],
        stream: false,
        functionCall: 'auto',
        temperature: 0,
    });
    // The usage reported by the API includes the tokens consumed
    // by the function definitions
    console.log('result', result.fullResponse?.usage?.prompt_tokens);
})();
How's it going?
I added it in v1.2.0, using the npm package openai-chat-tokens:
https://hmarr.com/blog/counting-openai-tokens/
But it has not been rigorously tested.
You can use it in v1.2.0 like this:
import { GPTTokens } from 'gpt-tokens'
const usageInfo = new GPTTokens({
    model   : 'gpt-3.5-turbo-1106',
    messages: [
        { role: 'user', content: 'What\'s the weather like in San Francisco and Paris?' },
    ],
    tools   : [
        {
            type    : 'function',
            function: {
                name       : 'get_current_weather',
                description: 'Get the current weather in a given location',
                parameters : {
                    type      : 'object',
                    properties: {
                        location: {
                            type       : 'string',
                            description: 'The city and state, e.g. San Francisco, CA',
                        },
                        unit    : {
                            type: 'string',
                            enum: ['celsius', 'fahrenheit'],
                        },
                    },
                    required  : ['location'],
                },
            },
        },
    ],
})
console.info('Used tokens: ', usageInfo.usedTokens)
console.info('Used USD: ', usageInfo.usedUSD)
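For context on how a library like openai-chat-tokens can estimate this at all: per the hmarr blog post linked above, the API appears to inject function definitions into the system message as a TypeScript-like namespace block and tokenize it as ordinary text. The helper below is a rough, hypothetical sketch of that serialization (the name `formatFunctionDefinitions` and the exact output format are assumptions; the real format is undocumented and reverse-engineered), not the library's actual implementation.

```javascript
// Approximate how function definitions may be rendered into the prompt
// before tokenization (assumption, based on the reverse-engineered
// format described in hmarr's blog post; the real format is undocumented).
function formatFunctionDefinitions(functions) {
    const lines = ['namespace functions {', '']
    for (const fn of functions) {
        if (fn.description) lines.push(`// ${fn.description}`)
        lines.push(`type ${fn.name} = (_: {`)
        const required = fn.parameters.required ?? []
        for (const [name, prop] of Object.entries(fn.parameters.properties)) {
            if (prop.description) lines.push(`// ${prop.description}`)
            // Parameters not listed in `required` are rendered as optional
            const optional = required.includes(name) ? '' : '?'
            // Enums become a union of string literals, e.g. "celsius" | "fahrenheit"
            const type = prop.enum
                ? prop.enum.map((v) => JSON.stringify(v)).join(' | ')
                : prop.type
            lines.push(`${name}${optional}: ${type},`)
        }
        lines.push('}) => any;', '')
    }
    lines.push('} // namespace functions')
    return lines.join('\n')
}
```

Tokenizing the resulting string with the model's encoding, plus a small fixed per-message overhead, is what gets an estimate close to the `prompt_tokens` the API actually reports.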
OpenAI models now support what is called function calling.
Is there a chance that this library is going to support it? If not, can you please let me know how I should get the number of tokens for the functions? Thank you.