Open YeagerAI-Bulat opened 1 month ago
OpenHands started fixing the issue! You can monitor the progress here.
The workflow to fix this issue encountered an error. Please check the workflow logs for more information.
Current behavior
All our plugin integrations (OpenAI and AsyncAnthropic) are called as-is. Since each of these integrations makes an API call, it can fail for many reasons (network errors, invalid API credentials, timeouts, API-side errors). We currently do not handle these failures or retry the call, and nothing is recorded in the execution result for the affected validator.
Expected behavior
We need to handle these errors in our Python code to make it more robust, with at least one retry. If the call still fails after the retries, that must be stated in the execution result for that validator. We should only catch the exceptions raised by the LLM integration, not our own exceptions.
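A minimal sketch of what this could look like. The names `run_with_retry` and `ExecutionResult` are illustrative, not the real API; the tuple of provider exception types would in practice come from the SDKs (e.g. the OpenAI/Anthropic error classes):

```python
# Hypothetical sketch: retry an external LLM call once and record the
# outcome in the validator's execution result. Only the exception types
# passed in `external_errors` are caught; our own exceptions propagate.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple, Type

@dataclass
class ExecutionResult:  # illustrative stand-in for the real result type
    success: bool
    output: Optional[str] = None
    error: Optional[str] = None

def run_with_retry(call_llm: Callable[[], str],
                   external_errors: Tuple[Type[BaseException], ...],
                   retries: int = 1) -> ExecutionResult:
    last_exc: Optional[BaseException] = None
    for _ in range(retries + 1):
        try:
            return ExecutionResult(success=True, output=call_llm())
        except external_errors as exc:  # provider errors only
            last_exc = exc
    # Exhausted retries: state the failure in the execution result.
    return ExecutionResult(success=False, error=str(last_exc))
```

Because the exception types are passed explicitly, a bug in our own code (say, a `KeyError`) is not swallowed by the retry loop and still surfaces normally.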
Idea
This will probably be done with try/except, applied only to the code that calls the external APIs, so that our own exceptions are not caught, which could lead to undesired behavior.
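For the AsyncAnthropic integration the same pattern applies to awaited calls. A hedged async sketch, where `ProviderError` is a stand-in for the SDK's actual exception type:

```python
# Hypothetical async sketch: wrap the awaited provider call in
# try/except so only the SDK's exceptions are retried; our own
# exceptions propagate unchanged.
import asyncio

class ProviderError(Exception):
    """Stand-in for the SDK's exception (e.g. a timeout or API error)."""

async def call_with_retry(coro_factory, retries: int = 1):
    last_exc = None
    for _ in range(retries + 1):
        try:
            return await coro_factory()
        except ProviderError as exc:  # never catch our own exceptions here
            last_exc = exc
    raise last_exc  # caller records this in the execution result

# usage sketch
async def main():
    calls = {"n": 0}
    async def flaky():
        calls["n"] += 1
        if calls["n"] == 1:
            raise ProviderError("transient timeout")
        return "ok"
    print(await call_with_retry(flaky))  # prints "ok" after one retry

asyncio.run(main())
```

Note that `coro_factory` is a callable returning a fresh coroutine, since a coroutine object cannot be awaited twice when retrying.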