af009 / fuku

Ollama Integration for Godot
MIT License
21 stars · 1 fork

Longer response time versus terminal? #1

Closed BirkinSornberger closed 4 weeks ago

BirkinSornberger commented 1 month ago

Apologies if this is a rookie question, but my response times seem much slower using this add-on, versus the speed I get when prompting the AI in my terminal. I can't think of any reason for this since everything is running locally.

I am using codellama:13b with an RTX 3060. I have Ollama installed and the model is running. Through the terminal, the responses are immediate, and they complete extremely quickly. In the Godot plugin I seem to have to wait a full 30 seconds to a minute to get a response. Is this just because of how Ollama handles requests? I can't imagine it's supposed to be this slow.

af009 commented 4 weeks ago

Hi, the difference is mostly in how messages are displayed. In the terminal, responses appear faster because the text is streamed progressively as it is generated, while the plugin shows the full answer all at once only after it has been fully generated. However, I've made some improvements that should enhance the experience. I'll push an update soon.
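To illustrate the difference: Ollama's HTTP API (`POST /api/generate` on `http://localhost:11434` by default) returns newline-delimited JSON chunks when `"stream": true` is set, and a single JSON body otherwise. A terminal client prints each chunk as it arrives, so text appears immediately; a client that waits for the final chunk shows nothing until generation finishes. This is a minimal sketch of the chunk-assembly step, with the stream simulated in-process rather than fetched from a live server:

```python
import json

def collect_stream(chunks):
    """Assemble a full reply from Ollama-style streaming chunks.

    Each chunk is a JSON line like {"response": "...", "done": false}.
    Printing each piece as it arrives (instead of joining first) is
    what makes a streaming client feel faster, even though total
    generation time is the same.
    """
    parts = []
    for line in chunks:
        data = json.loads(line)
        parts.append(data.get("response", ""))
        if data.get("done"):
            break
    return "".join(parts)

# Simulated stream; in practice these lines arrive one at a time
# over HTTP from /api/generate with "stream": true.
stream = [
    '{"response": "Hello", "done": false}',
    '{"response": ", world", "done": false}',
    '{"response": "!", "done": true}',
]
print(collect_stream(stream))  # -> Hello, world!
```

The same total text is produced either way; only the perceived latency differs, which matches the terminal-vs-plugin behavior described above.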

BirkinSornberger commented 4 weeks ago

Hi, thanks for the response. I'll be looking forward to it. I'm also curious: is there any way to move the "Fuku" window to the bottom of the editor, where the Output, Debugger, Audio, etc. panels are located?

af009 commented 3 weeks ago

> I'm also curious, is there any way to move the "Fuku" window to the bottom of the editor, where the output/debugger/audio... etc are located?

I added an option for that in the new update.