Webador / SlmQueueBeanstalkd

Beanstalkd adapter for SlmQueue module

Missing SlmQueueBeanstalkd\Job\Exception #26

Closed kusmierz closed 10 years ago

kusmierz commented 10 years ago

In version 0.3/master, the folder "SlmQueueBeanstalkd\Job\Exception" is missing.

kusmierz commented 10 years ago

@juriansluiman why did you remove it?

https://github.com/juriansluiman/SlmQueueBeanstalkd/commit/dac19e39a0fe4e3364381c23e862a0d39786949f

juriansluiman commented 10 years ago

See my comment there, but it's to make the control flow between different implementations more equal. What if you want to release with a priority and/or delay? Inside the job, you can deal with those things directly.
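
For example, a rough and untested sketch of doing that from inside execute(); the options array with 'priority' and 'delay' keys is an assumption, so check BeanstalkdQueue::release() for the exact signature:

public function execute()
{
    // ... do some work ...

    // Put the job back on the queue with a priority and delay instead of
    // letting it finish; option keys are assumed, verify them in the adapter
    $this->getQueue()->release($this, array(
        'priority' => 1024, // Pheanstalk's default priority
        'delay'    => 60,   // retry in one minute
    ));
}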

kusmierz commented 10 years ago

Thanks, indeed it works. Please change this information in your documentation!

kusmierz commented 10 years ago

@juriansluiman it seems it doesn't work 100%: I get a Pheanstalk_Response::RESPONSE_NOT_FOUND response. Why? Probably because I'm releasing a job... which is deleted later (src/SlmQueueBeanstalkd/Worker/BeanstalkdWorker.php:32)
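
For reference, a simplified sketch (not the actual BeanstalkdWorker source, and the method signature is assumed) of the flow that seems to cause the NOT_FOUND: the worker deletes the job unconditionally after execute(), so a job that released itself is removed again before it can run.

public function processJob(JobInterface $job, QueueInterface $queue)
{
    $job->execute();      // execute() may have called $queue->release($job, ...)
    $queue->delete($job); // ...but the job is deleted anyway, so the released copy never runs
}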

kusmierz commented 10 years ago

@juriansluiman ?

kusmierz commented 10 years ago

@juriansluiman any chances for help?

juriansluiman commented 10 years ago

@kusmierz I will get into this later. Unfortunately, I have quite a lot to do until Friday / Saturday, so I guess I won't have time to check it out earlier.

To give me a hand with what is going on: can you reproduce the case reliably, and do you have a clue about the root cause of the bug?

kusmierz commented 10 years ago

@juriansluiman

  1. the job is executing // I'm using the beanstalk console and xdebug, so I can see everything clearly; I'm debugging line by line
  2. there is a new job in "reserved" state now
  3. I'm doing $this->getQueue()->release($this); inside the execute method
  4. the job is now waiting for execution (because I've added the "delay" option), so I can see it in the beanstalk console - it will wait anyway, because only one worker is collecting jobs
  5. everything is fine, but the next step will kill the job - these lines exactly. The job is deleted before it can be executed.

Anyway - I could push a job (a clone?) and it seems to work, but I'd like to make it better.

Maybe execute should return a value? If it returns null (void) or true, the job should be deleted, but false should not delete it.

juriansluiman commented 10 years ago

@kusmierz I have played with it for now and I see the failure I was afraid of. The job gets released, is deleted, and then popped as missing. I have tested a few approaches for the short term and the long term.

First, by cloning the job you create a new instance which is pushed as a new entry in beanstalk. Therefore you get a new id and all is well. This is only problematic when you track jobs by their identifiers.

Alternatively, we must agree upon this PR (juriansluiman/SlmQueue/pull/83) and enable status codes for jobs. The beanstalkd adapter could then use a release instead of the delete call. This is similar to your suggestion, but it's more extensible for other types of adapters (and we'd like to keep SlmQueue adapter agnostic).
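
To illustrate the idea, a hypothetical and untested sketch of what status codes could look like once that PR is in; the constant names and the worker handling below are made up for illustration and are not the actual SlmQueue API:

class MyJob extends AbstractJob
{
    // Hypothetical status constants, not part of SlmQueue yet
    const STATUS_SUCCESS = 0;
    const STATUS_RELEASE = 1;

    public function execute()
    {
        // do some work, setting $succeeded to indicate the outcome

        return $succeeded ? self::STATUS_SUCCESS : self::STATUS_RELEASE;
    }
}

// Hypothetical worker handling: delete on success, release otherwise
$status = $job->execute();
if ($status === MyJob::STATUS_RELEASE) {
    $queue->release($job, array('delay' => 60));
} else {
    $queue->delete($job);
}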

kusmierz commented 10 years ago

Hm, so what would you recommend by now?

juriansluiman commented 10 years ago

Mm, did I close this issue?

Anyway, first I would clone the job for now. That's the easiest way. If you need to track the job status, I would extend the Worker class and override it in module.config.php with your own factory. In your implementation, you could already start with status codes. The third option is to wait for the PR I mentioned to be merged (and soon we'll start on SlmQueue v0.4, where it's incorporated). Depending on your situation, I'd pick one of them.
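
For the second option, a rough sketch of the configuration side; the service key and factory name are illustrative and depend on how SlmQueueBeanstalkd registers its worker, so check the module's module.config.php for the real key:

// module.config.php of your own module
return array(
    'service_manager' => array(
        'factories' => array(
            // service key assumed, verify it against SlmQueueBeanstalkd's config
            'SlmQueueBeanstalkd\Worker\BeanstalkdWorker' => 'MyModule\Factory\MyBeanstalkdWorkerFactory',
        ),
    ),
);

The factory would then return your own worker subclass, in which you can treat the return value of execute() as a status code before deciding to delete or release the job.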

kusmierz commented 10 years ago

With cloning... should I push the job (as I'm doing it now) or could I release it? I think I'm going to clone it, and after the code is merged I'll use your solution. Thanks for your help!

juriansluiman commented 10 years ago

What I tested was to clone the job and push that one as a new job into the queue. You can't release the (cloned) job since that one isn't known by beanstalkd yet.

$job = clone $this;
$this->getQueue()->push($job);

You might want to consider a $count or so to make sure the job isn't cloned too often (code is untested):

// Namespaces below are assumed from SlmQueue's layout; adjust to your setup.
use RuntimeException;
use SlmQueue\Job\AbstractJob;
use SlmQueue\Queue\QueueAwareInterface;
use SlmQueue\Queue\QueueAwareTrait;

class MyJob extends AbstractJob implements QueueAwareInterface
{
    use QueueAwareTrait;

    const MAX_RETRIES = 5;

    public function __construct($count = 0)
    {
        // Track how often this job has been cloned and re-pushed
        $this->setMetadata('count', $count);
    }

    public function execute()
    {
        // do some work, setting $result to indicate success or failure

        if (!$result) {
            $count = (int) $this->getMetadata('count');
            if ($count > self::MAX_RETRIES) {
                throw new RuntimeException(sprintf(
                    'Stop pushing this job, already tried %s times', $count
                ));
            }

            // Re-push a clone with an incremented counter instead of releasing,
            // since the worker deletes the original job after execute()
            $job = clone $this;
            $job->setMetadata('count', ++$count);
            $this->getQueue()->push($job);
        }
    }
}

kusmierz commented 10 years ago

Ok, so I'm doing it right with pushing, thanks.

I've also tried to use metadata, but it doesn't work at all, at least with beanstalk. Why?

see \SlmQueueBeanstalkd\Queue\BeanstalkdQueue::pop's last line

return $this->createJob($data['class'], $data['content'], array('id' => $job->getId()));

It will always overwrite the metadata with only one element (id). So it doesn't work :(

juriansluiman commented 10 years ago

Good catch. I tried it locally with setContent() and that worked; typing it down here, I thought metadata was more logical to use. I remember the issue, as I discovered it myself quite some time ago, but I thought it had already been fixed.
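
Until that is fixed, a minimal and untested sketch of carrying the retry counter in the job content instead of the metadata, since the content does survive the pop (the payload shape is just an assumption for illustration):

public function execute()
{
    $payload = (array) $this->getContent();
    $count   = isset($payload['count']) ? (int) $payload['count'] : 0;

    // do some work, setting $result to indicate success or failure

    if (!$result && $count <= self::MAX_RETRIES) {
        $payload['count'] = $count + 1;

        $job = clone $this;
        $job->setContent($payload); // content, unlike metadata, is restored by pop()
        $this->getQueue()->push($job);
    }
}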

I created issue #33 for this.

kusmierz commented 10 years ago

Any plans to implement a correct release() here? Or is pushing a cloned job the final way?

juriansluiman commented 10 years ago

@kusmierz what I wrote here is the suggested way for now. When this PR gets merged, you can update your code and implement result return codes.

kusmierz commented 10 years ago

@juriansluiman OK, thanks, good to know :) So I'm waiting for it.