joshcho / ChatGPT.el

ChatGPT in Emacs
GNU General Public License v3.0

Given the slow processing time of requests, is there a way to generate the text word by word? #20

Closed ziova closed 1 year ago

ziova commented 1 year ago

Similar to the implementation in gpt.el

joshcho commented 1 year ago

I will look into that over the holidays. The slow processing time is very painful.

joshcho commented 1 year ago

EDIT: It turns out a tweak to chatgpt-wrapper can support this.

dyereh commented 1 year ago

Do you have a branch with this tweak added that we could try?

joshcho commented 1 year ago

No, I haven't gotten around to implementing this unfortunately. I'd be happy to accept any pull requests.

joshcho commented 1 year ago

chatgpt-wrapper already supports this feature, so it's a matter of changing the interface between the Python EPC server and ChatGPT.el. See https://github.com/mmabrouk/chatgpt-wrapper/issues/37#issuecomment-1385306342
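One way the interface change could work (a sketch only, not the project's actual design): buffer the streamed fragments on a background thread in the Python process, and let Emacs poll for new text on a timer instead of blocking on the full reply. The class and method names here (StreamBuffer, poll) are invented for illustration; they are not part of ChatGPT.el or chatgpt-wrapper.

```python
import threading
import queue

class StreamBuffer:
    """Buffers chunks from a generator so a caller (e.g. Emacs via an
    EPC-registered function) can poll for new text incrementally."""

    def __init__(self):
        self._queue = queue.Queue()
        self._done = False

    def start(self, chunk_iter):
        # Drain the generator on a background thread so the EPC
        # server's main loop is never blocked by the API call.
        def worker():
            for chunk in chunk_iter:
                self._queue.put(chunk)
            self._done = True
        threading.Thread(target=worker, daemon=True).start()

    def poll(self):
        """Return (chunks, finished). chunks may be empty between polls;
        finished is True once the generator is exhausted and drained."""
        chunks = []
        while True:
            try:
                chunks.append(self._queue.get_nowait())
            except queue.Empty:
                break
        return chunks, self._done and self._queue.empty()
```

An EPC server could register poll as a callable that ChatGPT.el invokes from a timer, inserting each returned chunk into the buffer as it arrives.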

joshcho commented 1 year ago

Also, I am not entirely certain, but comint-mode might be helpful here. https://github.com/joshcho/ChatGPT.el/issues/3

anonimitoraf commented 1 year ago

Thanks so much for this package!

I've changed chatgpt.py's bot.ask(query) to bot.ask_stream(query), which results in an error:

"TypeError(\"Object of type '<class 'generator'>' cannot be converted by `tosexp`. It's value is '<generator object ChatGPT.ask_stream at 0x10619a650>'\")"

which makes sense. Any ideas on how a "generator" response could be processed?
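The error occurs because EPC's sexp serializer (tosexp) can only encode concrete values, not generator objects. The simplest workaround is to exhaust the generator before returning, though that gives up incremental display. A minimal sketch, assuming ask_stream yields text fragments the way chatgpt-wrapper's does; FakeBot is a stand-in for the real ChatGPT object:

```python
class FakeBot:
    """Stand-in for chatgpt-wrapper's bot object (illustration only)."""
    def ask_stream(self, query):
        # A real bot would yield text fragments as they arrive.
        yield "Streaming "
        yield "reply to: "
        yield query

def ask(bot, query):
    # tosexp cannot serialize a generator, so materialize it first:
    # joining the yielded fragments gives EPC a plain string to encode.
    return "".join(bot.ask_stream(query))
```

True word-by-word display would instead need the EPC layer to push or expose each fragment as it arrives, rather than returning one joined string.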

joshcho commented 1 year ago

Addressed in https://github.com/joshcho/ChatGPT.el/commit/c38c9150b4f156c90c44f330aa0cd2497d66f8c5