Open sbromberger opened 2 months ago
PS: I forgot to add - this LSP server is amazing. Thank you for making it!
The files https://github.com/leona/helix-gpt/blob/master/src/providers/openai.ts and https://github.com/leona/helix-gpt/blob/master/src/providers/github.ts still reference gpt-4 in this block:
```ts
const body = {
  max_tokens: 7909,
  model: "gpt-4",
  n: 1,
  stream: false,
  temperature: 0.1,
  top_p: 1,
  messages
}
```
I modified them in my local copy to use the model passed via --openaiModel and --copilotModel, and the gpt-4 usage went away. I was also experiencing the timeout issues mentioned in #18, and those went away as well after the modification to github.ts.
Those lines also need to be modified to account for the lower max_tokens limit allowed by gpt-3.5.
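One way to handle the lower limit is to clamp the requested max_tokens to a per-model ceiling. The sketch below is illustrative only: the limit table and function name are assumptions, not helix-gpt's API, though the 4,096-token context of the original gpt-3.5-turbo (which would reject the hardcoded 7909) and the 8,192-token context of base gpt-4 are the commonly documented values.

```typescript
// Assumed per-model context limits (illustrative, not from helix-gpt).
const MODEL_TOKEN_LIMITS: Record<string, number> = {
  "gpt-4": 8192,
  "gpt-3.5-turbo": 4096,
};

// Clamp the requested max_tokens to the model's limit; unknown models
// pass through unchanged.
function clampMaxTokens(model: string, requested: number): number {
  const limit = MODEL_TOKEN_LIMITS[model] ?? requested;
  return Math.min(requested, limit);
}
```

Under these assumed limits, the hardcoded 7909 would be clamped to 4096 for gpt-3.5-turbo but left as-is for gpt-4.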
helix-editor version: helix 24.3 (b974716b)
helix-gpt version: helix-gpt-0.31-x86_64-linux
Describe the bug I have the following config:
On the OpenAI usage dashboard, I'm noticing my gpt-4 usage increasing as I use helix.
helix-gpt logs: The only possibly relevant logs are as follows:
helix logs: No relevant helix logs.
Does helix-gpt default to gpt-4 for certain actions? I'm really only testing documentation generation.