So the question is "how". When a job times out, we have 500ms (currently) to release the job back onto SQS.
If we also called the failed method in this timeframe, we would risk not releasing the job back onto SQS. Not the biggest issue, since the visibility timeout will expire and the job will be retried (though with an incorrect tries count).
Mhh, right. I personally would rather fire the failed event and rely on the visibility timeout, because it can at least let Cronitor know that the job timed out, plus logging and such.
I guess this is controllable using shouldFailOnTimeout(). What do you think?
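(For reference, the job-level knobs being discussed look roughly like this in plain Laravel; the class name is made up, while $timeout, $failOnTimeout, the failed() hook and shouldFailOnTimeout() are standard Laravel, not anything bridge-specific.)

```php
<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Throwable;

class ProcessReport implements ShouldQueue
{
    use InteractsWithQueue;

    // Per-job timeout in seconds (most people only set the Lambda timeout instead).
    public $timeout = 15;

    // Whether the worker should mark the job as failed when it times out;
    // this is what the shouldFailOnTimeout() check reads.
    public $failOnTimeout = true;

    public function handle(): void
    {
        // Long-running work goes here.
    }

    // The "failed" hook discussed above: useful for alerting (e.g. Cronitor) and logging.
    public function failed(Throwable $exception): void
    {
        // Notify, log, clean up...
    }
}
```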
I don't think it makes sense tbh.
If you specify a timeout for a specific job and it timed out, sure, it makes sense.
But 99% of people don't set timeouts per job, only on the Lambda. Say you set it to 15 seconds. At 14.5 seconds, we cancel execution and release the job. That's not the 15 you set. If we also want to call the failed hook, we need more time. How much? That hugely depends on the actual code.
I think it's better to inform users that on timeout, the failed hook won't (actually: can't) be called.
Right, but then the job will likely re-run again and "fail" (in the sense that it doesn't complete successfully) again and again?
If the job is not "failed", then it exists in a weird state where it is not successful yet it has little chance of being successful (assuming this isn't a temporary fluke that made the job time out).
The job will be flagged as failed and the tries count will be incremented. The default maxTries is 3, so after 3 timeouts the job won't be retried again.
This change is merely about whether the "onError" hook is called on timeouts.
ahh ok! Thanks for explaining!
@mnapoli what's your feeling on the matter? Should we try to trigger the hook no matter what, or should we consider timeouts an unavoidable dead end and only make sure the job gets flagged as such?
I think maintaining the job tries is important eh?
Consider a timeout a job failure, increase the tries count, release the job back into SQS, then try to call $job->fail() as well.
In all other cases we should be able to trigger $job->fail() without any issue, right?
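A rough sketch of that order of operations (illustrative only, not the bridge's actual implementation; $job and the exception type are just placeholders):

```php
// Illustrative sketch of the order of operations being proposed here.
// With roughly 500ms left before the Lambda timeout:

// 1. Release the job back onto SQS first, so the tries count stays correct
//    even if we run out of time right after this call.
$job->release();

// 2. Then, if the job should fail on timeout, also fire the failure hooks
//    (the failed() method, logging, Cronitor, ...).
if ($job->shouldFailOnTimeout()) {
    $job->fail(new \RuntimeException('Job timed out'));
}
```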
@georgeboot @mnapoli should we close this issue?
Anything missing like closing the database connection on job failures?
From what I can see, the bridge currently only closes the DB connection in the Octane runtime and when persist is false.
FPM uses pooling (right?) so that takes care of its own connections. FPM probably also closes down all active connections in the pool on shutdown?
For CLI and queue, it's regular PHP. I assume the underlying PDO will close connections on shutdown? Not too sure there.
And is shutdown even triggered? For queues we set a smaller timeout so that we can handle job failures etc., but I'm not sure if PHP's shutdown runs after that. Probably not.
@mnapoli can you confirm? You probably know a lot more about the internals of PHP.
FPM and Octane should be fine. The risky one is CLI and Queue IMO, if either hits the Lambda timeout it would build up connections and eventually run out.
Bref has several runtimes. With BREF_LOOP_MAX set (e.g. Octane), the PHP process is kept alive; it's up to the user (or the Octane runtime if used) to close connections properly. Laravel Queues runs with the function runtime, with or without BREF_LOOP_MAX depending on the user config.
When PHP stops, the DB connections are automatically closed (https://stackoverflow.com/a/22944533/245552), so I wouldn't expect a problem for most users.
But if a user uses BREF_LOOP_MAX, that won't be the case. In that case we would need to clean up / close the connections explicitly.
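For the BREF_LOOP_MAX / long-lived process case, one hedged sketch of such explicit cleanup, using standard Laravel queue hooks (e.g. registered in a service provider's boot() method):

```php
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Queue;

// Sketch: when BREF_LOOP_MAX keeps the PHP process alive between jobs,
// no process shutdown closes the connections, so disconnect explicitly.
// Queue::looping() fires before the worker picks up the next job.
Queue::looping(function (): void {
    DB::disconnect();
});
```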
I think it would make sense to trigger the usual events; if there are 500ms available, I'd expect that to be plenty. Also, I don't find it shocking that the job times out 500ms before the actual Lambda timeout, we do the same thing in Bref for other scenarios (time out earlier than the Lambda timeout to correctly clean up stuff).
Awesome explanation, thanks!
If Lambda times out, will PHP shut down (will the Lambda scheduler send a SIGTERM) or will it just die all of a sudden (SIGKILL or worse)?
Depending on that, we should manually exit the container on timeout, or leave the runtime to do so.
FYI in case you want to dig into how timeouts are handled in FPM: https://github.com/brefphp/bref/blob/ebb6bf37c5f83b35a79b67e7f210f565ebf3e476/src/Event/Http/FpmHandler.php#L119-L151 (to be clear, not applicable here, just sharing for context)
For the function runtime (which includes Queues here) we looked at using signals: https://github.com/brefphp/bref/pull/895. That PR was never merged because it didn't work for FPM, but we could remove the FPM part (it's not needed anymore anyway) and merge it in the future. That solution would allow for a clean (and automatic) handling of timeouts for functions.
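For reference, the signal-based idea looks roughly like this (a hedged sketch, not the code from that PR; it assumes the pcntl extension is available in the runtime and that the remaining invocation time in milliseconds, $remainingMs here, is known):

```php
// Sketch of signal-based timeout handling for the function runtime.
pcntl_async_signals(true);

pcntl_signal(SIGALRM, function (): void {
    // Runs in userland just before Lambda would kill the container:
    // release/fail the job, close connections, flush logs, etc.
    throw new \RuntimeException('Invocation is about to hit the Lambda timeout');
});

// Arm the alarm with ~1 second of margin before the real Lambda deadline.
// $remainingMs is assumed to come from the invocation context.
pcntl_alarm(max(1, (int) floor($remainingMs / 1000) - 1));
```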
In any case, if we consider the current behavior today:
If Lambda times out, will PHP shut down (will the Lambda scheduler send a SIGTERM) or will it just die all of a sudden (SIGKILL or worse)?
AFAIR it's a complete interruption of the container, we cannot clean anything. Things will just die all of a sudden.
Yeah okay. In that case, we should exit the process ourselves so that PHP will close the DB connections and other things.
Edit: hold on. My assumption was that a Lambda instance would be cycled after a timeout happened. But is that indeed the case? It sounds logical, but otherwise the next invocation might be cleaning up stuff for the previously timed-out invocation.
But why do you stop and start FPM after a timeout?
In case of a timeout it is interpreted by Lambda like a runtime crash: the container is reused but all the processes are restarted.
So shutting down the PHP process in case of a timeout is a good approach I think.
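Concretely, that could be as simple as something like this once the timed-out job has been handled (a sketch relying on PHP's shutdown behaviour; Lambda reuses the container but the runtime starts a fresh PHP process):

```php
// Sketch: treat the timed-out invocation like a worker crash and stop the process.
// PHP's shutdown then closes PDO connections, and the next invocation
// starts with a fresh PHP process in the reused container.
register_shutdown_function(function (): void {
    // Last-chance cleanup (flush logs, metrics, ...) before the process ends.
});

exit(1);
```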
@mnapoli https://github.com/brefphp/bref/pull/895 might be nice to have, even without the FPM support, especially for queues and Laravel's scheduled tasks (CLI)
@tillkruss yep I agree, it's just been lower on my priority list so far
When a queued job runs too long and times out, we should make sure:
$job->failed() is called (via #31)