Closed davidgilbertson closed 1 month ago
I just tested with llama3.2 running locally via Ollama and that works, although it puts a full markdown code block in my code, backticks and all.
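(As a hedged illustration only, not CodeGPT's actual code: the "backticks and all" behavior is the classic case of a model wrapping its answer in a markdown fence that the extension then inserts verbatim. A post-processing step that strips a surrounding fence could look roughly like this.)

```python
import re

def strip_code_fence(text: str) -> str:
    """Remove a surrounding ```lang ... ``` markdown fence from LLM output, if present.

    Hypothetical helper for illustration; CodeGPT's real handling may differ.
    """
    match = re.match(r"^```[\w+-]*\n(.*?)\n?```\s*$", text.strip(), re.DOTALL)
    return match.group(1) if match else text

# A fenced response is unwrapped; plain text passes through unchanged.
print(strip_code_fence("```python\nprint('hi')\n```"))  # -> print('hi')
print(strip_code_fence("plain text"))                   # -> plain text
```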
And it works with Claude too:
So the issue is somewhere at the intersection of
Thank you for reporting! I will have a look at it soon.
You work fast, nice!
What happened?
I select a function, and perform the "Edit code" option and type "Make this code better".
It just puts "NULL" where the code was. If I use the chat instead it works, using the same model, so I know it's picking up my API key correctly.
I believe my settings are correct:
All the OpenAI > GPT-* models do the same thing: they insert "NULL" where the code was. The o1-preview model raises an error, which I think was reported elsewhere. If I change the model to GPT-4o-mini - FREE, it works (although it puts the changed code BEHIND the popup, so I need to grab the mouse and move the popup to see whether I want to accept the change or not).
Relevant log output or stack trace
Steps to reproduce
Not sure; I just installed the extension, tried it, and got this error. I tried it in another project and got the same error. Note that the "Request closed" event fires about 1 ms after "Request opened"; is that a clue?

CodeGPT version
2.11.6-241.1
Operating System
Windows