leona / helix-gpt

Code assistant language server for Helix with support for Copilot/OpenAI/Codeium/Ollama
MIT License
285 stars 19 forks

[BUG] Looks like openai is using gpt-4 even though the config shows 3.5-turbo? #54

Open sbromberger opened 2 months ago

sbromberger commented 2 months ago

helix-editor version helix 24.3 (b974716b)

helix-gpt version helix-gpt-0.31-x86_64-linux

Describe the bug I have the following config:

[language-server.gpt]
command = "helix-gpt"
args = ["--handler", "openai", "--openaiKey", "123", "--openaiModel", "gpt-3.5-turbo-16k",  "--logFile", "/tmp/helix-gpt.log"]

On the OpenAI usage dashboard, I'm noticing my gpt-4 usage increasing as I use helix.

helix-gpt logs The only possibly relevant logs are as follows:

APP 2024-04-11T21:10:18.821Z --> fetch | /v1/chat/completions

APP 2024-04-11T21:10:24.465Z --> response | https://api.openai.com/v1/chat/completions | 200

helix logs No relevant helix logs

Does helix-gpt default to gpt-4 for certain actions? I'm mainly testing documentation generation.

sbromberger commented 2 months ago

PS: I forgot to add - this LSP server is amazing. Thank you for making it!

salva-ferrer commented 2 months ago

the files

https://github.com/leona/helix-gpt/blob/master/src/providers/openai.ts and https://github.com/leona/helix-gpt/blob/master/src/providers/github.ts

still reference gpt-4 in this block:

    const body = {
      max_tokens: 7909,
      model: "gpt-4",
      n: 1,
      stream: false,
      temperature: 0.1,
      top_p: 1,
      messages
    }

I modified them in my local copy to use the model passed via --openaiModel and --copilotModel, and the gpt-4 usage went away. I was also experiencing the timeout issues mentioned in #18, and those went away as well after the modification to github.ts.

Those lines also need to be modified to account for the lower max_tokens limit allowed by gpt-3.5.
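A minimal sketch of what the fix above could look like: build the request body from the configured model instead of the hardcoded "gpt-4", and pick max_tokens per model. The MAX_TOKENS values and the requestBody helper are illustrative assumptions, not the project's actual code or limits.

```typescript
// Illustrative per-model output-token budgets (assumed values, not
// taken from helix-gpt or the OpenAI docs).
const MAX_TOKENS: Record<string, number> = {
  "gpt-4": 7909,
  "gpt-3.5-turbo-16k": 15000,
  "gpt-3.5-turbo": 3000,
};

// Hypothetical helper: the configured model (e.g. from --openaiModel)
// is passed in rather than hardcoded in the body literal.
function requestBody(model: string, messages: object[]) {
  return {
    max_tokens: MAX_TOKENS[model] ?? 3000, // fall back to a safe budget
    model,
    n: 1,
    stream: false,
    temperature: 0.1,
    top_p: 1,
    messages,
  };
}

const body = requestBody("gpt-3.5-turbo-16k", [
  { role: "user", content: "Write docs for this function." },
]);
console.log(body.model, body.max_tokens);
```

With a shape like this, the --openaiModel flag would flow through to both the model field and the token budget, which is the two-part fix described above.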