teknik-eksjo / chronos

A scheduling app that helps teachers submit workday outlines
MIT License

Created a celery beat task which removes inactive users every week #145

Open hugolundin opened 8 years ago

hugolundin commented 8 years ago

Closes issue #94
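
For reviewers who want context, here is a rough sketch of the kind of task described in this PR. It is not the PR's actual code; the names (`remove_inactive_users`, the `User` model, the 30-day cutoff) are assumptions.

```python
# Hypothetical sketch of a weekly cleanup task -- not the PR's actual code.
from datetime import datetime, timedelta

from celery import Celery

celery = Celery('app')  # broker/result backend assumed to be configured in web/config.py


@celery.task(name='tasks.remove_inactive_users')
def remove_inactive_users():
    """Delete users who have been inactive longer than the (assumed) 30-day cutoff."""
    cutoff = datetime.utcnow() - timedelta(days=30)
    # The real task would go through the app's models, roughly:
    #   User.query.filter(User.last_seen < cutoff).delete()
    #   db.session.commit()
```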

Greenheart commented 8 years ago

@hugolundin How should I QA this? Simply by rebuilding my containers and changing the values on line 39 in `web/config.py` to something in the near future?

Also, how can I view the logger output? :)
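
For what it's worth, that QA approach matches how a Celery 3.1 beat schedule is usually configured. The actual contents of `web/config.py` aren't shown in this thread, so the entry below is an assumption, but pointing the crontab at a minute or two ahead and rebuilding is the quick way to see it fire:

```python
# Hypothetical web/config.py entry -- the real file may look different.
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'remove-inactive-users-weekly': {
        'task': 'tasks.remove_inactive_users',
        # Production: every Sunday at 03:00.
        # For QA, temporarily point this a minute or two ahead,
        # e.g. crontab(hour=20, minute=45), then rebuild and watch the logs.
        'schedule': crontab(hour=3, minute=0, day_of_week=0),
    },
}
```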

Limpan commented 8 years ago

That's kind of it. It's hard to test it more thoroughly. In the future there should be a test or two for the task.

Logs can be viewed with `docker-compose logs -f worker beat`, which will give you the output from both the worker and the beat containers.
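
On the future test: in Celery 3.x, setting `CELERY_ALWAYS_EAGER` makes tasks run synchronously in-process, so a test can exercise the task without a broker. A rough sketch, where the module paths and fixtures are assumptions about this project's layout:

```python
# Rough pytest sketch for the cleanup task. The imports (`tasks`, `web.models`)
# and the fixtures (db_session, stale_user, active_user) are assumed names.
from tasks import celery, remove_inactive_users
from web.models import User


def test_removes_only_inactive_users(db_session, stale_user, active_user):
    celery.conf.CELERY_ALWAYS_EAGER = True  # run the task in-process, no broker needed
    remove_inactive_users.delay().get()

    remaining = db_session.query(User).all()
    assert active_user in remaining
    assert stale_user not in remaining
```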

Greenheart commented 8 years ago

```
sam@ubbe:~/Documents/github/chronos$ docker-compose logs -f worker beat
Attaching to chronos_worker_1, chronos_beat_1
worker_1      | /usr/local/lib/python3.5/site-packages/celery/platforms.py:812: RuntimeWarning: You are running the worker with superuser privileges, which is
worker_1      | absolutely not recommended!
worker_1      | 
beat_1        | [2016-05-19 20:33:39,803: INFO/MainProcess] beat: Starting...
worker_1      | Please specify a different user using the -u option.
beat_1        | celery beat v3.1.23 (Cipater) is starting.
worker_1      | 
worker_1      | User information: uid=0 euid=0 gid=0 egid=0
worker_1      | 
beat_1        | __    -    ... __   -        _
worker_1      |   uid=uid, euid=euid, gid=gid, egid=egid,
beat_1        | Configuration ->
worker_1      | [2016-05-19 20:33:42,170: WARNING/MainProcess] celery@c0df91bd65f2 ready.
beat_1        |     . broker -> amqp://rabbitmq:**@rabbitmq:5672//
worker_1      | 
beat_1        |     . loader -> celery.loaders.app.AppLoader
worker_1      | worker: Warm shutdown (MainProcess)
beat_1        |     . scheduler -> celery.beat.PersistentScheduler
worker_1      |  
beat_1        |     . db -> celerybeat-schedule
worker_1      |  -------------- celery@c0df91bd65f2 v3.1.23 (Cipater)
beat_1        |     . logfile -> [stderr]@%INFO
worker_1      | ---- **** ----- 
beat_1        |     . maxinterval -> now (0s)
worker_1      | --- * ***  * -- Linux-4.4.0-22-generic-x86_64-with-debian-8.4
worker_1      | -- * - **** --- 
worker_1      | - ** ---------- [config]
worker_1      | - ** ---------- .> app:         app:0x7fdf6d78da20
worker_1      | - ** ---------- .> transport:   amqp://rabbitmq:**@rabbitmq:5672//
worker_1      | - ** ---------- .> results:     redis://redis:6379/0
worker_1      | - *** --- * --- .> concurrency: 4 (prefork)
worker_1      | -- ******* ---- 
chronos_beat_1 exited with code 0
worker_1      | --- ***** ----- [queues]
worker_1      |  -------------- .> celery           exchange=celery(direct) key=celery
worker_1      |                 
worker_1      | 
chronos_worker_1 exited with code 0
```

Seems like the worker doesn't need to run as root. Also, I didn't see any output from the worker or beat when I ran it yesterday.

Limpan commented 8 years ago

The sudo warning isn't that big a deal when running in a container.

What schedule do you have? I would have expected some output...
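
One aside that may explain the silence: the task only produces worker-side output if it logs something, for example via Celery's task logger, and beat normally logs a `Scheduler: Sending due task ...` line at INFO when an entry actually comes due, so its absence usually points at the schedule configuration rather than the task itself. A minimal sketch of logging from inside the task (the task name and body are assumptions):

```python
# Sketch: log from inside the task so its output shows up under
# `docker-compose logs -f worker`. Task name and body are assumptions.
from celery import Celery
from celery.utils.log import get_task_logger

celery = Celery('app')  # broker config omitted here
logger = get_task_logger(__name__)


@celery.task(name='tasks.remove_inactive_users')
def remove_inactive_users():
    removed = 0  # placeholder for the real deletion logic
    logger.info('Removed %d inactive users', removed)
```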

Greenheart commented 8 years ago

@Limpan I'll most likely try this again tomorrow.

I must have missed something with the configuration of the schedule.