Closed xmoiduts closed 5 months ago
@xmoiduts is attempting to deploy a commit to the ShipBit Team on Vercel.
A member of the Team first needs to authorize it.
The SlickGPT implementation relies on an older, outdated way of specifying and calculating model windows that OpenAI has since changed. It was once correct but isn't anymore. Thanks for fixing this; it has been on my list for a while, but I didn't get to it.
If you remove your mods, I can merge your PR so that you're listed as a contributor. Or I can take your changes and create another PR - as you wish.
The latest updates on your projects. Learn more about Vercel for Git ↗︎
| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| slickgpt | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Apr 9, 2024 10:11pm |
closed in favor of your other PR that was already merged. Thank you!
[Do not merge directly; it contains my diverged modifications]
The [tokens left] count shown at the bottom left of the chat session is not correct. Simply changing `OpenAiModelStats.maxTokens` will not fix it. The issue comes from conflating two different concepts: the max tokens a model can generate in a reply (often 4096 now) and the max context tokens a model accepts (16k, 128k, and even 200k for Claude). This commit aims to split them.
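To illustrate the split, here is a minimal sketch (not the actual SlickGPT code; the names `ModelStats`, `contextWindow`, `maxCompletionTokens`, and `tokensLeft` are hypothetical) of keeping the two limits as separate fields and deriving a "tokens left" figure from the context window rather than the reply cap:

```typescript
// Hypothetical sketch: one field per concept instead of a single maxTokens.
interface ModelStats {
  contextWindow: number;        // max tokens the model accepts overall (prompt + reply)
  maxCompletionTokens: number;  // max tokens the model may generate in one reply
}

const modelStats: Record<string, ModelStats> = {
  // Figures as published by OpenAI at the time of this PR.
  'gpt-3.5-turbo': { contextWindow: 16385, maxCompletionTokens: 4096 },
  'gpt-4-turbo': { contextWindow: 128000, maxCompletionTokens: 4096 },
};

// "Tokens left" for the prompt is bounded by the context window,
// minus what is already used and what must stay reserved for the reply.
function tokensLeft(model: string, usedTokens: number): number {
  const stats = modelStats[model];
  return Math.max(0, stats.contextWindow - usedTokens - stats.maxCompletionTokens);
}
```

With the old single-field scheme, a model like gpt-4-turbo would report at most 4096 tokens left even with an almost empty chat; with the split, the count is driven by the 128k context window.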