What is causing the application to time out at "/api/llm/step2" in production? The OpenAI response is slow.
The error occurs intermittently: the endpoint can fail 5 times in a row, then work 5 times in a row, and response speed fluctuates.
Everything seems to work fine both locally and in Staging, but that could also be chance, since there are periods when the issue doesn't occur at all. It is probably the same problem, as the issue stems from OpenAI response time.
Frontend calls to the backend for LLM responses seem slower in general compared to June. We measured the response time of the Azure OpenAI calls and it seemed fine, so the issue might be something else in Innotin's backend. Response time only looked fast because a ReadableStream is returned immediately; the content itself still arrives slowly.
Fix
[x] Move reading the stream from the backend to the frontend. This prevents timeouts and also gives the user a better visual cue that the app is responding (see the sketches after this list).
[x] Fix issue where the "Next step" button appears before AiResponse streaming has completed
[x] Fix issue where the "Edit AiResponse" button appears before AiResponse streaming has completed