Webador / SlmQueueBeanstalkd

Beanstalkd adapter for SlmQueue module

Restarting worker re-processes old jobs #41

Closed juriansluiman closed 9 years ago

juriansluiman commented 9 years ago

Currently experiencing this with SlmQueueBeanstalkd master and SlmQueue v0.4.

  1. Start worker
  2. Add some jobs, all get processed correctly
  3. Stop worker
  4. Do not add any more jobs
  5. Start worker again

Expected result: the worker waits for new jobs. Actual result: the worker starts processing all jobs from the previous run.

$ php public/index.php queue beanstalkd default --timeout=1
This is a job::execute()
This is a job::execute()
^CFinished worker for queue 'default':
 - interrupt by an external signal on 'process.idle'
 - 4.51MB memory usage
 - 2 jobs processed
$ php public/index.php queue beanstalkd default --timeout=1
This is a job::execute()
This is a job::execute()

In the above, the second pair of "This is a job::execute()" lines should not occur. As an alternative test, the following code in a ZF2 test controller does not reproduce the bug:

public function testPheanstalkAction()
{
    $pheanstalk = $this->getServiceLocator()
        ->get('SlmQueueBeanstalkd\Service\PheanstalkService');

    // Tube stats before putting a job
    var_dump($pheanstalk->statsTube('default'));

    // Put a job, reserve it and delete it again
    $pheanstalk->useTube('default');
    $pheanstalk->put('Dit is een test');
    $job = $pheanstalk->reserve(1);
    $pheanstalk->delete($job);

    // Tube stats after the job has been deleted
    var_dump($pheanstalk->statsTube('default'));
    exit;
}

In this scenario, the first var_dump() shows x jobs (e.g. 2) ready and x total. The second var_dump() shows x jobs ready and x+1 total, so the put/reserve/delete cycle leaves no extra job behind. This means the PHP code in the controller is working as expected.
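For completeness, the tube can also be inspected directly after stopping the worker. The following is a diagnostic sketch only, not part of the reported code: it assumes the same `PheanstalkService` used above is available via a `$serviceLocator`, and that `statsTube()` returns the standard beanstalkd tube statistics with array access. If `current-jobs-ready` is non-zero after the worker exits, a restarted worker will pick those jobs up, which would explain the observed re-processing.

```php
<?php
// Diagnostic sketch (assumes $serviceLocator and the PheanstalkService above).
// Run after stopping the worker to see whether old jobs remain in the tube.
$pheanstalk = $serviceLocator->get('SlmQueueBeanstalkd\Service\PheanstalkService');

$stats = $pheanstalk->statsTube('default');

// Standard beanstalkd stats-tube keys
printf(
    "ready: %d, reserved: %d, buried: %d, total: %d\n",
    $stats['current-jobs-ready'],
    $stats['current-jobs-reserved'],
    $stats['current-jobs-buried'],
    $stats['total-jobs']
);
```

This requires a running beanstalkd daemon, so it is meant to be run against the same instance the worker connects to, not as a standalone script.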