j0rd1smit / obsidian-copilot-auto-completion

MIT License
128 stars 12 forks

Support GPT-4 Turbo #13

Closed WilliamsJack closed 11 months ago

WilliamsJack commented 11 months ago

GPT-4 Turbo is a newer model with greater capability and considerably lower cost than GPT-4 (although still more expensive than GPT-3.5). In my testing, it works well for autocompletion, so I believe it should be made available as an option.

j0rd1smit commented 11 months ago

(Sorry for the late response.) It seems to be working well. However, I have been thinking about it, and I no longer think the drop-down approach is viable: OpenAI's model list keeps growing, and we would have to update it with every release. So let's just replace the dropdown with a text field; that way, users can enter whichever model they want.

If you still want the credit and kudos for this PR, feel free to change the code to:

<TextSettingItem
    name={"Model"}
    description={"The OpenAI model that will be queried, e.g. gpt-3.5-turbo."}
    placeholder={"gpt-3.5-turbo"}
    value={settings.openAIApiSettings.model}
    errorMessage={errors.get("openAIApiSettings.model")}
    setValue={(value: string) =>
        updateSettings({
            openAIApiSettings: {
                ...settings.openAIApiSettings,
                model: value,
            },
        })
    }
/>
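For context, the `setValue` callback above performs an immutable nested update: it spreads the existing `openAIApiSettings` object and overrides only the `model` field. A minimal standalone sketch of that pattern (the `Settings` interface and `updateSettings` function here are illustrative stand-ins, not the plugin's actual types):

```typescript
// Hypothetical mirror of the plugin's settings shape; the real
// interfaces live in the plugin's source.
interface OpenAIApiSettings {
    model: string;
    // ...other fields such as the API key would live here too
}

interface Settings {
    openAIApiSettings: OpenAIApiSettings;
}

let settings: Settings = {
    openAIApiSettings: { model: "gpt-3.5-turbo" },
};

// Immutable merge: build a new top-level settings object rather
// than mutating the old one in place.
function updateSettings(partial: Partial<Settings>): void {
    settings = { ...settings, ...partial };
}

// Changing only the model: spread the existing nested object,
// then override the one field being edited.
updateSettings({
    openAIApiSettings: { ...settings.openAIApiSettings, model: "gpt-4-turbo" },
});

console.log(settings.openAIApiSettings.model); // "gpt-4-turbo"
```

Because the user now types the model name freely, any new OpenAI model works without a plugin update.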

If you don't have time, I can do it in a separate PR this weekend.

WilliamsJack commented 11 months ago

I was thinking that too, although I don't think a dropdown would have to be updated that frequently. Eventually they'll set an alias for the latest version of GPT-4 Turbo, like they do for the latest version of GPT-3.5.

However, I think a text field is still the better option, as it opens the door to further configurability in the future, such as custom models or fine-tuning. I'll push up a commit now to change that!