earl / beanstalkc

A simple beanstalkd client library for Python
Apache License 2.0

All 4 workers got the same reserved job ???? #46

Closed by brailateo 10 years ago

brailateo commented 10 years ago

Hello there :-) I've been a happy beanstalkd user for two years (from Go programs) and everything has been fine. Now I tried to wrap a Java program in a Python shell, taking one job from a beanstalkd queue and running it in a subprocess. Everything worked in tests with a single worker, but when I ran 4 of them I was unpleasantly surprised that all 4 workers started working on the same job simultaneously: each of them had reserved the first job in the tube and taken a bite at it :-(

I should mention that my first attempt was to delete the job at the end of a successful run, but I was forced to delete it immediately after reserving it in order to make just one of the 4 workers do the work. Is there something special I have to do? The source is shamelessly simple:

import os
import sys
import time

import beanstalkc

try:
    bsk = beanstalkc.Connection(host=server, port=11300, parse_yaml=lambda x: x.split('\n'))
    bsk.watch('registre')    # reserve jobs from this tube
    bsk.use('couchreg')      # put follow-up jobs into this tube
except Exception:
    print "cannot open connection"
    sys.stdout.flush()
    sys.exit(1)

while 1:
    print time.strftime('%Y-%m-%d %X'), " waiting for work ..."
    sys.stdout.flush()
    job = bsk.reserve(timeout=60)
    if job is not None:
        # I had to delete it immediately
        job.delete()
        try:
            sys.stdout.flush()
            g = os.popen("/bin/bash doJavaWork.sh " + job.body)
            for line in g.readlines():
                sys.stdout.write('%s' % line)   # was: stdout.write(...)
                if 'CASCADE_NEXT' in line:
                    # positionally these are priority=20, delay=1800 in
                    # beanstalkc's put(body, priority, delay, ttr)
                    bsk.put(job.body, 20, 1800)
            print "Finished it"
            sys.stdout.flush()
        except Exception:
            print "bad things"
            sys.stdout.flush()
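
What beanstalkd's reserve/TTR semantics imply here: when a reserved job's TTR expires before the worker deletes, releases, or touches it, the server moves the job back to the ready queue, so the next reserve hands out the very same job again. A toy in-memory sketch (a fake tube with a fake clock, not beanstalkc itself) illustrating that timeline:

```python
class FakeTube:
    """Toy model of one beanstalkd tube: ready jobs, plus reserved jobs
    that fall back to ready once their TTR deadline passes."""
    def __init__(self):
        self.ready = []      # job bodies ready to be reserved
        self.reserved = {}   # job body -> TTR deadline on the fake clock
        self.clock = 0

    def put(self, body):
        self.ready.append(body)

    def reserve(self, ttr):
        # reservations whose deadline has passed go back to ready
        for body, deadline in list(self.reserved.items()):
            if deadline <= self.clock:
                del self.reserved[body]
                self.ready.append(body)
        if not self.ready:
            return None
        body = self.ready.pop(0)
        self.reserved[body] = self.clock + ttr
        return body

    def tick(self, seconds):
        self.clock += seconds

tube = FakeTube()
tube.put("job-1")

# Worker A reserves with TTR=1, then works for 30 "seconds"...
a = tube.reserve(ttr=1)
tube.tick(30)
# ...so by the time worker B calls reserve, the TTR has expired and
# the *same* job is handed out again.
b = tube.reserve(ttr=1)
print(a, b)  # job-1 job-1
```

With four workers all reserving faster than the 1-second TTR, each of them ends up holding a fresh reservation on the same job, which matches the symptom above.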
brailateo commented 10 years ago

ME DUMB ... :-) The task that pushes the job leaves TTR = 1 second, and the job takes longer than that!
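
For anyone hitting the same thing: once the producer puts the job with a TTR longer than the longest run (beanstalkc's put accepts a ttr argument, and job.touch() can extend a reservation mid-run), the worker can safely delete the job only after the work succeeds. A minimal Python 3 sketch of that pattern; run_java_job and the loop shape are assumptions based on the snippet above, not tested against a live beanstalkd:

```python
import subprocess

def run_java_job(body):
    """Run the poster's external script (doJavaWork.sh is from the
    snippet above, not part of beanstalkc); True only on exit 0."""
    return subprocess.call(["/bin/bash", "doJavaWork.sh", body]) == 0

def work_loop(bsk, run_job=run_java_job, max_jobs=None):
    """Reserve, do the work, and only then delete the job.

    Relies on the producer putting jobs with a TTR longer than the
    longest run, e.g. bsk.put(body, ttr=1800) in beanstalkc."""
    done = 0
    while max_jobs is None or done < max_jobs:
        job = bsk.reserve(timeout=60)
        if job is None:
            continue                 # reserve timed out; poll again
        if run_job(job.body):
            job.delete()             # success: now it is safe to remove
        else:
            job.release(delay=10)    # failure: let another worker retry
        done += 1
```

Deleting after the work (instead of right after reserve) also restores beanstalkd's retry behavior: if a worker crashes mid-job, the TTR expires and another worker picks the job up.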