Closed: raphaelmosaic closed this issue 20 hours ago
Thanks for reporting this @raphaelmosaic.
https://community.openai.com/t/why-was-max-tokens-changed-to-max-completion-tokens/938077/3
Looks like the new o1 series models (like o1-preview and o1-mini) use `max_completion_tokens` instead of `max_tokens`. We'll work on updating Jan to handle this smoothly.
For now, here's a quick workaround: open the `models/o1-mini/model.json` file (and the `o1-preview` one if you use that model too) and change `max_tokens` to `max_completion_tokens`.
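For illustration, here is a sketch of what the edited file might look like. This assumes the parameter sits under a `parameters` key and that `4096` is the existing value; the exact field layout and value may differ in your Jan version, so only rename the key you already have:

```json
{
  "parameters": {
    "max_completion_tokens": 4096
  }
}
```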
This should get you up and running. We'll be pushing an update soon to handle this automatically.
Changed this; it still does not work. Please fix this properly ASAP, as I am certainly one of many people who would use o1-mini / o1-preview via Jan and hit this issue.
Build containing bug fixes: https://github.com/janhq/jan/actions/runs/11007851785
Jan version
v0.5.4-640
Describe the Bug
When making a request using o1-preview or o1-mini, I get:

`Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.`

I cannot change these model parameter settings.
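Until Jan handles this automatically, the rename can also be done client-side. The sketch below is a hypothetical helper (not part of Jan or the OpenAI SDK) that renames the parameter for o1-series models before a request is built; the `adapt_params` name and the model-prefix check are assumptions for illustration:

```python
def adapt_params(model: str, params: dict) -> dict:
    """Return a copy of params with max_tokens renamed to
    max_completion_tokens when the model is an o1-series model."""
    adapted = dict(params)  # avoid mutating the caller's dict
    if model.startswith("o1") and "max_tokens" in adapted:
        adapted["max_completion_tokens"] = adapted.pop("max_tokens")
    return adapted

# o1 models get the renamed parameter; other models are untouched.
print(adapt_params("o1-mini", {"max_tokens": 256}))
# {'max_completion_tokens': 256}
print(adapt_params("gpt-4o", {"max_tokens": 256}))
# {'max_tokens': 256}
```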
Steps to Reproduce
No response
Screenshots / Logs
What is your OS?