aduros / ai.vim

Generate and edit text in Neovim using OpenAI and GPT.
ISC License

openai.lua:83: Expected comma or object end but found T_END #18

Open Kamilcuk opened 1 year ago

Kamilcuk commented 1 year ago
    Error executing vim.schedule lua callback: /home/kamil/.vim/plugged/ai.vim/lua/_ai/openai.lua:83: Expected comma or object end but found T_END at character 205
    stack traceback:
            [C]: in function 'decode'
            /home/kamil/.vim/plugged/ai.vim/lua/_ai/openai.lua:83: in function 'on_stdout_chunk'
            /home/kamil/.vim/plugged/ai.vim/lua/_ai/openai.lua:14: in function </home/kamil/.vim/plugged/ai.vim/lua/_ai/openai.lua:13>
Kamilcuk commented 1 year ago

This happens when the API returns an error. In that case the output does not contain "data:"-prefixed streaming lines, only the error JSON.

aaronik commented 1 year ago

This happens to me when I set vim.g.ai_completions_model = "code-davinci-002" or vim.g.ai_completions_model = "gpt-3.5-turbo" and run the normal Ctrl-A action in normal mode.

nicolaiskogheim commented 1 year ago

I got this error message when my free tier ran out. Once I set up billing, the plugin started working again; I had not changed any settings. It would be nice if the plugin hinted at what the issue could be.

Kamilcuk commented 1 year ago

The message appears because the API's error response is plain JSON, while a normal streaming response is prefixed with "data:". The reading routine needs to be fixed to handle the error response properly.
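The fix described above could look like the following sketch. Streaming responses arrive as Server-Sent Events ("data: {...}" lines, terminated by "data: [DONE]"), while error responses are a single bare JSON object; the extract_payloads helper below is hypothetical, not part of ai.vim:

```lua
-- Classify raw chunks from the OpenAI HTTP stream before JSON-decoding them.
-- Lines with a "data:" prefix are streamed payloads; anything else is assumed
-- to be a bare error body such as {"error": {"message": "..."}}.
local function extract_payloads(chunk)
    local payloads = {}
    for line in chunk:gmatch("[^\r\n]+") do
        local data = line:match("^data:%s*(.+)$")
        if data then
            if data ~= "[DONE]" then
                table.insert(payloads, data)  -- one streamed JSON object
            end
        else
            -- No "data:" prefix: treat the whole line as an error body
            table.insert(payloads, line)
        end
    end
    return payloads
end
```

Each payload could then go through vim.json.decode, and a decoded table with an `error` field would be routed to on_complete(json.error.message) instead of on_data(json).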

tothlac commented 1 year ago

I still have the same problem, although I just subscribed to GPT Plus:

Error executing vim.schedule lua callback: /Users/tothlac/.vim/bundle/ai.vim/lua/_ai/openai.lua:83: Expected comma or object end but found T_END at character 205
stack traceback:
        [C]: in function 'decode'
        /Users/tothlac/.vim/bundle/ai.vim/lua/_ai/openai.lua:83: in function 'on_stdout_chunk'
        /Users/tothlac/.vim/bundle/ai.vim/lua/_ai/openai.lua:14: in function </Users/tothlac/.vim/bundle/ai.vim/lua/_ai/openai.lua:13>

Any ideas?

Kamilcuk commented 1 year ago

"I just subscribed"

Wait 5 minutes and try again.

tothlac commented 1 year ago

It's still not working. Should I generate a new API key after subscribing to gpt plus, or do I need to do anything else to make it work again?

washonrails commented 1 year ago

@tothlac @Kamilcuk

I made some observations, debugged the code, and solved the problem as follows.

After some analysis, debugging, and code review, I took some notes that should serve as future fixes (as a helper, I used the tool itself against itself to assist with the review). All the corresponding screenshots and videos are at the end of this comment.

  1. Your environment variable OPENAI_API_KEY must be properly set; make sure no other assignment overwrites it in your .zshrc or .bashrc. E.g. in your .zshrc or .bashrc:

    export OPENAI_API_KEY="sk-14MH4C53R4NDR350LV3DTH3PR063M" (obviously this key does not exist).

  2. There is a memory leak: the on_stdout_chunk function accumulates chunks in the buffered_chunks variable, but it is never emptied. To address this, you can cap the variable's size and empty it when the limit is reached or at the end of the process. (I'd say this problem is of medium priority, based on the response generated by OpenAI.)

    I made this code to debug what was happening; the result is in the screenshot at the end of this review:

        vim.api.nvim_err_writeln(json_str)
        local json = vim.json.decode(json_str)

        if json.error then
            on_complete(json.error.message)
            buffered_chunks = ""
        else
            on_data(json)
        end
  3. In the exec function, a variable named error is assigned from the vim.loop.spawn call. However, error is the name of a built-in Lua function, and shadowing it as a variable should be avoided. To fix this, we can rename the variable to something like spawn_error. I made these changes below:

        local handle
        local spawn_error

        handle, spawn_error = vim.loop.spawn(cmd, {
            args = args,
            stdio = {nil, stdout, stderr},
        }, function (code)
            stdout:close()
            stderr:close()
            handle:close()

            vim.schedule(function ()
                if code ~= 0 then
                    on_complete(vim.trim(table.concat(stderr_chunks, "")))
                else
                    on_complete()
                end
            end)
        end)

        if not handle then
            on_complete(cmd .. " could not be started: " .. spawn_error)
        else

  4. The M.completions function (line 105 of openai.lua) sets a default value for the stream key of the body table. However, if stream has already been set in the body table, the default value is ignored. It is better to use vim.tbl_deep_extend to make sure the table is extended correctly.

  5. The vim.tbl_extend function (line 336 of shared.lua; lines 106 and 119 of openai.lua) is used to extend a table with default values. However, it modifies the original table. If the original table needs to be kept intact, it is better to create a new table and copy the values from the original into it using vim.tbl_deep_extend.
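The non-mutating merge suggested in items 4 and 5 can be illustrated with a pure-Lua stand-in for vim.tbl_deep_extend("force", ...). This is a simplified sketch, not the real Neovim implementation, which also validates its behavior argument and handles more edge cases:

```lua
-- Simplified stand-in for vim.tbl_deep_extend("force", ...):
-- returns a NEW table; later arguments override earlier ones; inputs are untouched.
local function deep_extend(...)
    local out = {}
    for i = 1, select("#", ...) do
        local t = select(i, ...)
        for k, v in pairs(t) do
            if type(v) == "table" and type(out[k]) == "table" then
                out[k] = deep_extend(out[k], v)  -- merge nested tables recursively
            else
                out[k] = v
            end
        end
    end
    return out
end

-- Defaults come first so the caller's body wins, and neither table is mutated:
local defaults = { stream = true, temperature = 0 }
local body = { stream = false, prompt = "hello" }
local merged = deep_extend(defaults, body)
```

Here merged carries the caller's stream = false alongside the default temperature, while defaults itself keeps stream = true.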

    https://user-images.githubusercontent.com/98850074/231534748-e6858eb0-a32a-4552-96ce-37385cc8cf9d.mp4

-- Errors raised (screenshot, 2023-04-12 12-03-34)

(screenshot, 2023-04-12 13-22-29)

I will not make a pull request, because correctly exporting the environment variable is a per-user matter, and the other errors were debugged only by me on my machine without any kind of tests beforehand. I'll leave this issue for the creator to take a look at the errors and solutions I mentioned earlier, and to be careful not to let these mistakes happen again. Sorry for the bad English, haha.

fneu commented 1 year ago

I'm getting this error only from visual mode. Changing ai_edits_model has not made a difference so far.

aduros commented 1 year ago

It looks like the edits API has been deprecated by OpenAI: https://openai.com/blog/gpt-4-api-general-availability

We'll need to look into porting parts of the plugin to the Chat Completions API.
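For illustration, the core of such a port is the change in request-body shape between the deprecated edits endpoint and Chat Completions. A hedged sketch (field names follow the OpenAI API; the model names and example strings are just placeholders):

```lua
-- Old (deprecated) edits request body: POST /v1/edits
local edits_body = {
    model = "text-davinci-edit-001",
    input = "some selected text",
    instruction = "fix the grammar",
}

-- Chat Completions equivalent: POST /v1/chat/completions
-- The instruction/input pair becomes a list of role-tagged messages,
-- and streaming (the "data:" lines discussed above) is opt-in via stream.
local chat_body = {
    model = "gpt-3.5-turbo",
    stream = true,
    messages = {
        { role = "system", content = "fix the grammar" },
        { role = "user", content = "some selected text" },
    },
}
```

The response shape changes too: edits returned choices[1].text, while chat completions return choices[1].message.content (or choices[1].delta.content per streamed chunk).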