Hi,
just a few things that are not clear to me, and some random thoughts...
queue->push($job)
is not what I would call the 'worker view'. Isn't the issue with queue->pop(), i.e. the worker processing the queue? That being said...
So this issue exists both for SlmQueueBeanstalk and SlmQueueSqs. Not so much for SlmQueueDoctrine (although these exceptions are easy to overlook, especially when the garbage-delete setting for buried jobs is short).
Well, Friday-evening thoughts. I think I see some beer coming my way, so I'll sign off for now :p
Bas
On 4 Sep 2015, at 06:14, Jack Peterson notifications@github.com wrote:
After playing with a worker and passing somewhat large jobs to beanstalkd, and wondering why jobs were just hanging there with no obvious reason for failure, I finally wrapped my queue->push($job) in a try/catch and discovered that exceptions were being thrown but not caught or displayed anywhere that I could tell.
I'd like to propose a number of potential solutions for discussion:
- echoing out exceptions in the worker, so someone can at least see things, pushing them out to stderr
- adding a place to plug in a Zend\Log\Writer as an optional dependency, to push log information out
- pushing things out via syslog (though I'm not sure about cross-OS compatibility issues)

Any one of those would be helpful in terms of diagnosing weird issues like this.
Thoughts, recommendations, etc.?
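For concreteness, a minimal sketch of the try/catch wrapping described above; `$queue` and `$job` stand in for an already-configured SlmQueue queue and job and are assumptions for the sketch, not code from the project:

```php
<?php
// Minimal sketch: surface exceptions that push() would otherwise swallow.
// $queue is assumed to be an SlmQueue queue pulled from the service manager;
// $job is any SlmQueue\Job\JobInterface instance.
try {
    $queue->push($job);
} catch (\Exception $e) {
    // Without this, the failure was invisible and the job silently hung.
    fwrite(STDERR, sprintf(
        "Failed to push job %s: %s\n",
        get_class($job),
        $e->getMessage()
    ));
    throw $e; // rethrow so the caller can still react
}
```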
Thanks for following back up. I suppose I would like to see a default logging adapter in place as the default condition, or at least a recommended (and documented) default that pushes to syslog if available, or out to stderr (if that's not already the case). SQS appears to have a max payload size of 256 KB, and beanstalkd has a configurable max payload size. While one could, and probably should, write workers wrapped in try/catch blocks that plug into a logger of some kind... that type of work seems more appropriate for AOP rather than being handled on a per-worker basis.
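To illustrate the "default logging adapter" idea, here is a hypothetical decorator around a queue's push() that forwards failures to a Zend\Log logger writing to stderr. The LoggingQueueDecorator class and its wiring are assumptions for the sketch, not part of SlmQueue; only the Zend\Log calls are real ZF2 API:

```php
<?php
use Zend\Log\Logger;
use Zend\Log\Writer\Stream;

// Hypothetical decorator: wraps any SlmQueue queue and logs push() failures
// to stderr by default, so exceptions are never silently swallowed.
class LoggingQueueDecorator
{
    private $queue;  // the wrapped SlmQueue queue
    private $logger; // Zend\Log\Logger instance

    public function __construct($queue, Logger $logger = null)
    {
        if ($logger === null) {
            $logger = new Logger();
            $logger->addWriter(new Stream('php://stderr'));
        }
        $this->queue  = $queue;
        $this->logger = $logger;
    }

    public function push($job, array $options = [])
    {
        try {
            $this->queue->push($job, $options);
        } catch (\Exception $e) {
            $this->logger->err(sprintf(
                'push() failed for %s: %s',
                get_class($job),
                $e->getMessage()
            ));
            throw $e; // still propagate after logging
        }
    }
}
```

Wrapping at this level keeps the logging concern out of individual workers, which is essentially the AOP-style separation suggested above.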
@jackdpeterson What would your suggestions be going forward?
We could also adapt the SQS queue such that pushing a job greater than 256 KB is not allowed. That would mitigate any strangeness, right?
throw new \InvalidArgumentException(self::BODY_TOO_LARGE_MESSAGE, self::BODY_TOO_LARGE_CODE);
Throwing that when the size of the message body is > 256 KB at job construction time would seem to make sense to me at a high level. It's been a while since I've touched this project... but to me, that would be an explicit way to handle the issue for SQS in a way that wouldn't be odd for workers. Adding documentation in the same tone would probably be beneficial too: the developer using SlmQueue should be instructed that a message in a queue should be a pointer to a moderately large object (e.g., offload to S3 if in the AWS ecosystem, or some equivalent object store), rather than assuming objects of arbitrary size can be pushed in. It's an easy mistake to make at first.
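A sketch of what that guard might look like in the SQS adapter; the class name, method signature, and serialization step are assumptions, while 262144 bytes is SQS's documented 256 KB cap, and the constants mirror the snippet above:

```php
<?php
// Sketch of a payload-size guard in a hypothetical SQS queue subclass.
// Everything here except the 256 KB limit itself is illustrative.
class SizeCheckedSqsQueue
{
    const MAX_BODY_SIZE          = 262144; // 256 * 1024 bytes, SQS's hard cap
    const BODY_TOO_LARGE_MESSAGE = 'Serialized job body exceeds the 256 KB SQS limit';
    const BODY_TOO_LARGE_CODE    = 1;

    public function push($job)
    {
        $body = serialize($job); // however the adapter actually serializes jobs

        if (strlen($body) > self::MAX_BODY_SIZE) {
            // Fail fast at push time instead of letting the job hang silently.
            throw new \InvalidArgumentException(
                self::BODY_TOO_LARGE_MESSAGE,
                self::BODY_TOO_LARGE_CODE
            );
        }

        // ...hand the body to the AWS SQS client here...
    }
}
```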
I do use the Doctrine queue with large payloads, so I guess it really depends on what storage engine you have. I would say it makes sense to implement these checks only locally, on the adapters.
If we first implement https://github.com/JouwWeb/SlmQueue/issues/28, we can then let the SQS queue throw such an exception. I'll rename the PR.
I'll close this since there has been no activity lately. Obviously, feel free to open a PR with such a feature :).