lispercat opened 1 week ago

Maybe there is a quick fix for this: on the same machine, when I talk to Copilot (company account) via the web interface, it takes at most 2 seconds, even when it has a lot of context and searches company files. With CopilotChat it takes ~10 seconds even for very simple questions where I don't ask it to analyze any code. Is there a way to speed this up?
Is it only the first question, or subsequent questions as well? It also depends on the model; gpt-4o is a lot faster than Claude, for example. On the first question the plugin needs to fetch models and agents and check policies; questions after that reuse the cached data, so they should be fine.
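If you want to pin a model explicitly rather than rely on the default, something like this should work; a minimal sketch, assuming the standard `setup()` options from the README:

```lua
-- Minimal config sketch; the option name assumes the documented CopilotChat.nvim setup().
require("CopilotChat").setup({
  model = "gpt-4o", -- gpt-4o tends to respond faster than the Claude models
})
```

Recent versions also have a model picker (`:CopilotChatModels`) if you want to switch at runtime.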
It's all questions, first and subsequent. The default gpt-4o is selected. If I open nvim and just open CopilotChat it's pretty quick, but as soon as I open a file from a project it gets really sluggish, and it seems to get slower as I keep asking questions.
Well, are you opening a big file? The content of the file needs to be sent every time you ask a question (the default behaviour is to send the whole buffer), and the history also has to be sent each time; that's how most LLMs work. 10 seconds is still a lot though, maybe it could also be curl related?
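If the whole-buffer default turns out to be the bottleneck, you can narrow what gets sent; a sketch assuming the `CopilotChat.select` helpers described in the README:

```lua
-- Sketch: send only the visual selection instead of falling back to the whole buffer.
local select = require("CopilotChat.select")
require("CopilotChat").setup({
  selection = select.visual,
})
```

With that, only explicitly selected text is attached to each question, which keeps the payload small when working in large files.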
Can you share the output of :checkhealth CopilotChat?
I merged some optimizations, but I'm still curious about your curl version and the size of the file.
Here is the checkhealth output:

```
CopilotChat: require("CopilotChat.health").check()

CopilotChat.nvim ~
CopilotChat.nvim [core] ~
CopilotChat.nvim [commands] ~
CopilotChat.nvim [dependencies] ~
```
Can you try the latest canary? I added some status reporting for embedding files as well.
Also, how big is the file again? A character count or line count will do.
You could also try upgrading curl and see if it helps; 7.81 is very old.
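If you're on lazy.nvim, switching to canary is just a branch pin; a sketch where the dependency list follows the README and may differ in your setup:

```lua
-- Sketch of a lazy.nvim spec pinned to the canary branch.
{
  "CopilotC-Nvim/CopilotChat.nvim",
  branch = "canary",
  dependencies = {
    { "zbirenbaum/copilot.lua" },
    { "nvim-lua/plenary.nvim" },
  },
}
```

For the size question, `:lua print(vim.api.nvim_buf_line_count(0), vim.fn.wordcount().chars)` prints the current buffer's line and character counts, and `curl --version` in a shell shows the version Neovim will pick up from your PATH.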