Closed: cheeseonamonkey closed this 8 months ago
I agree that this should be implemented as an optional setting.
Low-priority change for now, until I have access to gpt-4-32k xD
not planned - gpt-4-turbo is so cheap and has so much context. Individual defaults per folder (#27) is a better solution to this, imo.
could also use https://openrouter.ai/models/openrouter/auto
Automatically set model according to settings & chat size
Proposed feature:
This would be best implemented as an optional setting that a user could opt into. If enabled, the only models to choose from in the chat config would be:
gpt-3.5-turbo
gpt-4
The long-context models could then be selected automatically, only when required, determined by the chat's token count (there is no reason to use the gpt-3.5-turbo-16k model if you are well under the 4,000 token limit of gpt-3.5-turbo).
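A minimal sketch of what the selection logic could look like. The function name and the token-count input are hypothetical; the limits are the published context windows for these models (roughly 4k for gpt-3.5-turbo and 8k for gpt-4):

```python
# Sketch only: pick the cheapest variant of the user's chosen base
# model that still fits the current chat. `chat_tokens` would come
# from a tokenizer pass over the conversation (e.g. tiktoken).

GPT35_LIMIT = 4096  # gpt-3.5-turbo context window
GPT4_LIMIT = 8192   # gpt-4 context window

def pick_model(base_model: str, chat_tokens: int) -> str:
    """Upgrade to the long-context variant only when required."""
    if base_model == "gpt-3.5-turbo":
        return "gpt-3.5-turbo" if chat_tokens < GPT35_LIMIT else "gpt-3.5-turbo-16k"
    if base_model == "gpt-4":
        return "gpt-4" if chat_tokens < GPT4_LIMIT else "gpt-4-32k"
    return base_model  # unknown model: leave the user's choice alone

print(pick_model("gpt-3.5-turbo", 1200))   # short chat -> cheap model
print(pick_model("gpt-3.5-turbo", 6000))   # over 4k -> 16k variant
```

In practice you would also want to reserve some headroom for the model's reply rather than switching exactly at the limit.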