techdragon opened this issue 7 years ago
Hey folks,
I've spent a lot of time researching this and I'm having the same issues. Any news? I have to use AWS Elastic Beanstalk, and I decided against Celery because setting it up on EB was a nightmare (which is not unusual). After reading the documentation and looking into django-q, I decided to use it instead of Celery. But it's not working.
I tried to set up a daemon with supervisord (the one AWS EB already uses).
Do you have any ideas that could help us?
Cheers,
@techdragon @aisabellafontes A little off topic, but since you know how to do it: how do you configure Elastic Beanstalk to run the qcluster at deployment?
I have tried something like this in the .ebextensions:
qcluster:
  command: "source /opt/python/run/venv/bin/activate && python manage.py qcluster"
But it does not work well: it blocks the deployment process forever, as the command never returns. How do you do it?
Thanks
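For what it's worth, the command above blocks because qcluster runs in the foreground and never exits, so the deployment step waits on it forever. A minimal way to make the command return is to detach the process, for example (same paths as above; the log location is just an example):

qcluster:
  command: "source /opt/python/run/venv/bin/activate && nohup python manage.py qcluster > /var/log/qcluster.log 2>&1 &"

Keep in mind this is only a sketch: container_commands run from the staging directory during the deploy, which is one reason the post-deploy hook approach described further down the thread is the more robust option.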
Has anyone in the meantime managed to deploy django-q to aws elastic beanstalk?
I've largely moved away from using AWS Elastic Beanstalk, so other than this library which I came across a while ago, https://github.com/cuda-networks/django-eb-sqs... I've not really seen much else out there.
this was what worked for me:
https://gist.github.com/codeSamuraii/0e11ce6d585b3290b15a9ad163b9aa06
Note: the file names are reversed; the .sh is really the .config.
@bastiW
I've ended up making my own simple reusable worker for Elastic Beanstalk - https://github.com/DataGreed/django-eb-sqs-worker
Would appreciate feedback. I still have to clean up the docs a bit, though.
this was what worked for me: https://gist.github.com/codeSamuraii/0e11ce6d585b3290b15a9ad163b9aa06
@timomeara could you please give me a hint? Besides the reversed file names, is there anything else that could cause problems? I am still getting a TimeoutError on deploy.
Where is it timing out? Look at the eb-activity log; you should see each command from your .config. Look for something like:
... Command 04_mkdir_for_log_and_pid] : Starting activity...
... Command 04_mkdir_for_log_and_pid] : Completed activity.
... Command 05_djangoq_configure] : Starting activity...
... Command 05_djangoq_configure] : Completed activity.
... Command 06_djangoq_run] : Starting activity...
... Command 06_djangoq_run] : Completed activity. Result:
My .config runs the django-q scripts last, after migrate, collectstatic and all that. Here's that part:
04_mkdir_for_log_and_pid:
  command: "mkdir -p /var/log/djangoq/ /var/run/djangoq/"
05_djangoq_configure:
  command: "cp .ebextensions/django-q-cluster.sh /opt/elasticbeanstalk/hooks/appdeploy/post/ && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/django-q-cluster.sh"
  cwd: "/opt/python/ondeck/app"
06_djangoq_run:
  command: "/opt/elasticbeanstalk/hooks/appdeploy/post/django-q-cluster.sh"
Where is it timing out? Look at the eb-activity log; you should see each command from your .config.
Yeah, the log retrieval action was itself timing out. 😅
I eventually just recreated the EC2 instance and everything worked out.
Any hints on how to adjust the .sh script for Amazon Linux 2?
@vlntsolo, did you have any success with this?
@RachellCalhoun yes, please see the gist https://gist.github.com/vlntsolo/34261e6026ac0e303c40c6ece9961182
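For Amazon Linux 2 the layout changed: deploy hooks ship inside the app bundle under .platform/hooks/ rather than being copied into /opt/elasticbeanstalk/hooks/, and the app and virtualenv live under /var/app/. A rough AL2 equivalent of the hook above, saved as .platform/hooks/postdeploy/01_djangoq.sh (it must be committed with the execute bit set; this is only a sketch, see the gist for a tested version):

#!/usr/bin/env bash
set -e

# On the Amazon Linux 2 Python platform the app is deployed to /var/app/current
# and the virtualenv is created under /var/app/venv/.
source /var/app/venv/*/bin/activate
cd /var/app/current

mkdir -p /var/log/djangoq /var/run/djangoq

# Stop a cluster left over from the previous deploy, if any.
if [ -f /var/run/djangoq/qcluster.pid ]; then
    kill "$(cat /var/run/djangoq/qcluster.pid)" || true
fi

nohup python manage.py qcluster > /var/log/djangoq/qcluster.log 2>&1 &
echo $! > /var/run/djangoq/qcluster.pid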
AWS Elastic Beanstalk has a useful feature, "Worker Environments", for separating out the scaling and management of the infrastructure that handles long-running/background work. It automatically takes care of a number of things, like pulling items from the SQS queue and sending failed jobs to a Dead Letter Queue; as a consequence, it doesn't work directly with any existing Django task/worker system. Django-Q is already the easiest task system to get up and running with, so it would be really awesome if it had functionality to take full advantage of Elastic Beanstalk scaling out background work automatically, to further take away the pain of properly doing long-running work in the background.
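Concretely, in a worker environment you don't poll SQS yourself: the worker daemon pulls each message off the queue and delivers it to your application as an HTTP POST on localhost, roughly like this (the path and JSON body are made-up examples, not anything django-q defines):

# What the worker daemon effectively does for every message it dequeues:
curl -X POST http://localhost/tasks/ \
     -H "Content-Type: application/json" \
     -d '{"func": "myapp.tasks.send_report", "kwargs": {"user_id": 42}}'

A 200 response tells the daemon the message was handled; anything else (or a timeout) causes a retry and eventually the Dead Letter Queue. That push model is why django-q's broker-polling pusher can't be used as-is.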
From my reading of the AWS Elastic Beanstalk Worker Environment documentation, and referencing the architecture section of the Django-Q documentation, it looks like supporting this will require running something very close to qcluster, but with modified versions of both the monitor and pusher components, in order to handle tasks being pushed in by HTTP requests rather than pulled in from the broker.
It seems like a lot, but from the time I've already spent reading the django-q code, I think most of the work will be making an alternative to qcluster that contains a web server and connecting it to the existing pieces the correct way; much of the existing code would be unchanged. The primary changes would be modifying Sentinel.spawn_pusher to support spawning a different pusher (which would also be passed self.results_queue when using the Elastic Beanstalk broker), and then writing an alternative 'pusher' that can handle the incoming HTTP requests.
Thoughts?