After reserving a job, I cannot reserve again until I delete the job. That means I can't do any jobs in parallel.
What if I queue a job that takes 40 seconds because of some asynchronous call out to the internet? I can't handle any other jobs in the meantime, until that call returns and I call destroy.
What I've been doing is calling reserve right after a reserve returns:
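Something like this (a sketch only, against a hypothetical callback-style client; the `BeanstalkClient` interface, `slowInternetCall`, and the exact signatures are illustrative, not any particular library's API):

```typescript
// Hypothetical client interface, for illustration only.
interface BeanstalkClient {
  reserve(cb: (err: Error | null, jobId: string, payload: string) => void): void;
  destroy(jobId: string, cb: (err: Error | null) => void): void;
}

// Stand-in for the ~40 s asynchronous call out to the internet.
function slowInternetCall(payload: string, done: () => void): void {
  setTimeout(done, 40_000);
}

function work(client: BeanstalkClient): void {
  client.reserve((err, jobId, payload) => {
    if (err) throw err;

    // Reserve again right away, hoping to overlap the next job
    // with the slow work below on the same connection.
    work(client);

    slowInternetCall(payload, () => {
      // ~40 seconds later, delete the finished job.
      client.destroy(jobId, () => {});
    });
  });
}
```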
and for whatever reason, that does not work. I have to do something like the following instead:
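(Again a sketch, reusing the hypothetical client and `slowInternetCall` from the previous snippet; the point is that the next reserve only happens after destroy completes.)

```typescript
function workSerially(client: BeanstalkClient): void {
  client.reserve((err, jobId, payload) => {
    if (err) throw err;

    slowInternetCall(payload, () => {
      client.destroy(jobId, () => {
        // Only reserve again once the previous job is deleted,
        // so this connection processes jobs strictly one at a time.
        workSerially(client);
      });
    });
  });
}
```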
Is there a particular reason it has to work this way? Can beanstalkd not handle parallel requests?

I think the right way to solve this is to spawn more workers. That's fine, but there needs to be clear documentation that this is how it works.
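For example, purely as an illustration: reusing `workSerially` and the hypothetical `BeanstalkClient` from above, plus a made-up `connect()` helper, with one connection per worker (11300 is beanstalkd's default port):

```typescript
// Hypothetical factory returning an independent connection per worker.
declare function connect(host: string, port: number): BeanstalkClient;

// Each connection runs its own reserve -> work -> destroy loop,
// so up to WORKERS jobs can be in flight at once.
const WORKERS = 5;
for (let i = 0; i < WORKERS; i++) {
  workSerially(connect("127.0.0.1", 11300));
}
```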