Kamilcuk opened this issue 1 year ago
This happens when the API returns an error. In that case the output does not contain data: streaming lines, just the error JSON.
This happens to me when I specify vim.g.ai_completions_model = "code-davinci-002"
or vim.g.ai_completions_model = "gpt-3.5-turbo"
and run the normal ctrl-a action in normal mode.
I got this error message when my free tier ran out. When I set up billing, the plugin started working again. I have not changed any settings. It would be nice if the plugin hinted at what the issue could be.
The message appears because the API's error response is plain JSON, while the response during normal operation starts with a data: prefix. The reading routine has to be fixed to handle the error response properly.
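Roughly, the reading routine needs a check like the one below (only a sketch; handle_line, on_data and on_error are illustrative names, not the plugin's actual functions):

local function handle_line(line, on_data, on_error)
    if vim.startswith(line, "data: ") then
        -- normal streaming line: strip the prefix and decode the chunk
        local payload = line:sub(#"data: " + 1)
        if payload ~= "[DONE]" then
            on_data(vim.json.decode(payload))
        end
    else
        -- no "data:" prefix: the API most likely returned a plain error object
        local ok, json = pcall(vim.json.decode, line)
        if ok and type(json) == "table" and json.error then
            on_error(json.error.message)
        end
    end
end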
I still have the same problem, although I just subscribed to GPT-plus:
Error executing vim.schedule lua callback: /Users/tothlac/.vim/bundle/ai.vim/lua/_ai/openai.lua:83: Expected comma or object end but found T_END
at character 205
stack traceback:
[C]: in function 'decode'
/Users/tothlac/.vim/bundle/ai.vim/lua/_ai/openai.lua:83: in function 'on_stdout_chunk'
/Users/tothlac/.vim/bundle/ai.vim/lua/_ai/openai.lua:14: in function </Users/tothlac/.vim/bundle/ai.vim/lua/_ai/openai.lua:13>
Any ideas?
I just subscribed
Wait 5 min and try again.
It's still not working. Should I generate a new API key after subscribing to gpt plus, or do I need to do anything else to make it work again?
@tothlac @Kamilcuk
I made some observations, debugged the code, and solved the problem as follows.
After some analysis, debugging, and code review I took a few notes that may be important and could serve as future fixes (as a helper I used the AI itself to assist with the review). All the corresponding screenshots and videos are at the end of this comment.
1. Your environment variable OPENAI_API_KEY must be set correctly; make sure nothing else overwrites it in your .zshrc or .bashrc. For example, in your .zshrc or .bashrc:
export OPENAI_API_KEY="sk-14MH4C53R4NDR350LV3DTH3PR063M" (obviously this key does not exist).
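If you are not sure the variable actually reaches Neovim, a quick sanity check from inside the editor (just a check, not plugin code):

-- prints the key if the shell exported it, otherwise a warning
print(os.getenv("OPENAI_API_KEY") or "OPENAI_API_KEY is not set")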
2. I wrote the code below to debug what was happening; the result is in the screenshot at the end of this comment:
-- print the raw chunk so we can see exactly what the API returned
vim.api.nvim_err_writeln(json_str)
local json = vim.json.decode(json_str)
if json.error then
    -- the API returned an error object instead of a streaming chunk:
    -- report its message and reset the buffer
    on_complete(json.error.message)
    buffered_chunks = ""
else
    on_data(json)
end
3. In the exec function there is a variable named error that is assigned from the vim.loop.spawn call. However, error is the name of a built-in Lua function and should not be shadowed by a local variable. To fix this, we can rename the variable to something like spawn_error. I made the changes below:
local handle
local spawn_error
handle, spawn_error = vim.loop.spawn(cmd, {
    args = args,
    stdio = {nil, stdout, stderr},
}, function (code)
    stdout:close()
    stderr:close()
    handle:close()

    vim.schedule(function ()
        if code ~= 0 then
            on_complete(vim.trim(table.concat(stderr_chunks, "")))
        else
            on_complete()
        end
    end)
end)

if not handle then
    on_complete(cmd .. " could not be started: " .. spawn_error)
else
    -- ... the rest of the function (stdout/stderr reading) stays the same ...
end
4. The M.completions function (line 105 of openai.lua) sets a default value for the stream key of the body table; however, if stream has already been set in the body table, the default value is ignored. It would be better to use vim.tbl_deep_extend to make sure the table is extended correctly, as sketched below.
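For example, something along these lines (just a sketch, assuming the caller passes its own params table; defaults and params are illustrative names, not the plugin's actual variables):

-- merge the caller's values over the defaults: "force" lets the caller's
-- values win, while anything the caller leaves out falls back to the default
local defaults = { stream = true }
local body = vim.tbl_deep_extend("force", defaults, params or {})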
Errors raised (see the attached screenshots):
I will not open a pull request: exporting the environment variable correctly is something each user has to sort out on their own, and the other fixes were only debugged on my own machine without any real testing. I will leave this issue to the plugin's author to look over the errors and solutions I described above, so these mistakes don't happen again. Sorry for the bad English haha
I'm getting this error only from visual mode. Changing ai_edits_model has not made a difference so far.
It looks like the edits API has been deprecated by OpenAI: https://openai.com/blog/gpt-4-api-general-availability
We'll need to look into porting parts of the plugin to the Chat Completions API.
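For reference, a rough sketch of what the request body could look like against the Chat Completions endpoint (based on the public API docs; instruction and selected_text are placeholder names, and the prompt/streaming handling for the plugin would still need to be worked out):

-- POST to /v1/chat/completions instead of the deprecated /v1/edits
local body = {
    model = "gpt-3.5-turbo",
    stream = true,
    messages = {
        { role = "system", content = "Apply the instruction to the given text and return only the result." },
        { role = "user", content = instruction .. "\n\n" .. selected_text },
    },
}
-- the curl/spawn plumbing in openai.lua could stay largely the same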