Closed niklasfink closed 2 weeks ago
Hey @niklasfink thanks for reporting this!
Could you post a screenshot showing the complete stack trace (feel free to mask out the directory/username bits for privacy)?
We've added manual retries (and fixed some of the missing error catches) in https://github.com/Pythagora-io/gpt-pilot/pull/973 and I'd like to check if we've missed some edge case or if this is just due to the changes being in main but not released yet.
Hi @senko, the error shown is just the error output of the LLM API. In this case, I'm using litellm and it had a bug with long-lasting responses. However, Pythagora/gpt-pilot also fails to detect the error when I cancel the connection for testing or when another LLM error occurs. In the previous version the error detection was reliable and I was able to simply repeat the previous LLM call in all of my cases.
Works better now with 0.2.2+
Version
VisualStudio Code extension
Suggestion
Unfortunately, v0.2 no longer has the retry option, which could be used if a request to the LLM failed. A request to the LLM can temporarily fail for various reasons, so it made a lot of sense to be able to just repeat the action. In the current version, Pythagora just hangs saying "Waiting..." after the request has failed and you need to restart it completely.
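For illustration, here's a minimal sketch of the kind of retry behavior being requested. This is not gpt-pilot's actual code; `call_llm` and `LLMRequestError` are hypothetical placeholders for whatever client function and exception type the app uses:

```python
import random
import time


class LLMRequestError(Exception):
    """Hypothetical error for a failed LLM request (timeout, reset, API error)."""


def call_with_retries(call_llm, prompt, max_attempts=3, base_delay=1.0):
    """Retry a transient LLM failure with exponential backoff instead of hanging."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call_llm(prompt)
        except LLMRequestError:
            if attempt == max_attempts:
                # Surface the error to the user rather than sitting on "Waiting..."
                raise
            # Back off before retrying; jitter avoids synchronized retries.
            delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5)
            time.sleep(delay)
```

The key point is simply that a failed request ends in either a successful retry or a visible error, never an indefinite "Waiting..." state.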