jawa0 / aish3

AI LLM Agent with memories
Apache License 2.0

When receiving streaming LLM response tokens, app animation is jerky #61

Open jawa0 opened 11 months ago

jawa0 commented 11 months ago

The jerkiness is especially noticeable in the gradient pulse of the LLMChatContainer bounding rect. There is likely a bad interaction between the SDL event loop and something blocking while receiving tokens.

Consider using Python asyncio.
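One possible shape for the asyncio approach, sketched below under assumptions not taken from this issue: `blocking_token_stream` is a hypothetical stand-in for the blocking LLM response iterator, and `render_loop` stands in for the SDL frame loop. The blocking receive is pushed onto a worker thread via `asyncio.to_thread`, tokens flow through an `asyncio.Queue`, and each frame drains the queue without blocking, so the animation keeps stepping while tokens arrive.

```python
import asyncio
import time

def blocking_token_stream():
    # Hypothetical stand-in for a blocking streaming LLM response.
    for tok in ["streamed", " ", "tokens"]:
        time.sleep(0.01)  # simulated network latency per token
        yield tok

async def pump_tokens(out_q: asyncio.Queue) -> None:
    # Pull from the blocking iterator on a worker thread so the
    # event loop (and any animation tasks) never stall on recv.
    it = blocking_token_stream()
    while True:
        tok = await asyncio.to_thread(next, it, None)
        if tok is None:
            break
        await out_q.put(tok)
    await out_q.put(None)  # sentinel: stream finished

async def render_loop(out_q: asyncio.Queue, text: list) -> None:
    # Stand-in for the SDL frame loop: one iteration per frame.
    done = False
    while not done:
        # ...poll SDL events, animate the gradient pulse here...
        while not out_q.empty():       # drain without blocking
            tok = out_q.get_nowait()
            if tok is None:
                done = True
                break
            text.append(tok)
        await asyncio.sleep(1 / 60)    # yield for roughly one frame

async def main() -> str:
    q: asyncio.Queue = asyncio.Queue()
    text: list = []
    await asyncio.gather(pump_tokens(q), render_loop(q, text))
    return "".join(text)
```

The key property is that the frame loop only ever does a non-blocking drain; whether the next token has arrived or not, the frame renders on time. A plain `threading.Thread` feeding a `queue.Queue` would give the same decoupling without adopting asyncio wholesale.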