transitive-bullshit / agentic

AI agent stdlib that works with any LLM and TypeScript AI SDK.
https://agentic.so
MIT License
16.38k stars 2.14k forks

Fix prompt length calculation #548

Closed alxmiron closed 4 months ago

alxmiron commented 1 year ago

The current `_buildMessages()` calculates the number of tokens in the prompt incorrectly — there's a small discrepancy with the usage amount reported by ChatGPT (in non-stream mode). I fixed it by adapting the logic from here. Now our estimated `numTokens` should equal `message.detail.usage.prompt_tokens`.

zhujunsan commented 1 year ago

Better to check the model first and then calculate — gpt-3.5 and gpt-4 seem to count tokens differently.

zhujunsan commented 1 year ago

I also made this change in #546, but your code looks better 😀

zhujunsan commented 1 year ago

After testing, at least on gpt-3.5, `tokens_per_message` is 5. I think there's one missing `\n` that isn't being counted.

zhujunsan commented 11 months ago

Is anyone available to review or merge this PR?

transitive-bullshit commented 4 months ago

This project is undergoing a major revamp; closing out old PRs as part of the prep process.

Sorry I never got around to reviewing this PR. The chatgpt package is pretty outdated at this point. I recommend that you use the openai package or the openai-fetch package.