Closed — anlek closed this issue 9 months ago
This might fix the issue reported on issue #92.
I don't think it's directly related. I believe #92 is caused by a race condition where the Rubberduck UI panel seems to need to be in focus for things to generate correctly. Additionally, I have only been using gpt-4 and gpt-3.5-turbo in my configuration.
I would also like to specify the exact model to use — ideally gpt-3.5-turbo-16k — but because the setting is a dropdown, it doesn't seem to let me type in my own value, even when I try to force it as described above.
v1.17 supports gpt-4-32k and gpt-3.5-turbo-16k now.
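With v1.17 listing the larger-context models, the setting mentioned in this issue should now be selectable. A minimal sketch of what the settings.json entry might look like (the "rubberduck.model" key is taken from this issue; the exact set of allowed values depends on the extension's published configuration schema):

```json
{
  // VS Code settings.json accepts JSONC-style comments.
  // With v1.17, the larger-context model should be a valid choice here.
  "rubberduck.model": "gpt-3.5-turbo-16k"
}
```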
Is this request related to a problem? Please describe.
I've only just started playing with this extension, but I already seem to hit the max context length.
Describe the solution you'd like
I'd love to be able to use the 16k version (or have the extension auto-select the 16k model when the input is large).
Additional context
I've tried to force the setting to
"rubberduck.model": "gpt-3.5-turbo-16k"
however it errors, saying I'm not allowed to select a model that isn't on the provided list (gpt-3.5-turbo or gpt-4).