- [x] ~~On the first run (or after changes to the Celery configuration) you will need to run as the database admin account instead of the RW account to create tables -- temporarily modify settings.py and run:~~
Note on the above: the celery binary couldn't create these tables on stage because of permission problems with the separate RW/DBA accounts. Even though Celery would normally manage its migrations and tables itself, I had to run the following queries as the DBA account first to create the tables manually:
```sql
-- kombu_queue must be created first: kombu_message has a FK to it.
CREATE TABLE `kombu_queue` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(200) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;

CREATE TABLE `kombu_message` (
  `id` int NOT NULL AUTO_INCREMENT,
  `visible` tinyint(1) DEFAULT NULL,
  `timestamp` datetime DEFAULT NULL,
  `payload` text NOT NULL,
  `version` smallint NOT NULL,
  `queue_id` int DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `FK_kombu_message_queue` (`queue_id`),
  KEY `ix_kombu_message_timestamp_id` (`timestamp`,`id`),
  KEY `ix_kombu_message_visible` (`visible`),
  KEY `ix_kombu_message_timestamp` (`timestamp`),
  CONSTRAINT `FK_kombu_message_queue` FOREIGN KEY (`queue_id`) REFERENCES `kombu_queue` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
```
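With the tables in place, the worker just needs settings.py to point Celery at the database broker. A minimal sketch, assuming the kombu SQLAlchemy transport and hypothetical host/credentials (use the real RW account values):

```python
# settings.py fragment -- illustrative sketch only.
# The "sqla+mysql://" scheme selects kombu's SQLAlchemy transport;
# user, password, host, and database name below are placeholders.
CELERY_BROKER_URL = "sqla+mysql://rw_user:rw_password@db-host/ezid"

# Expire stored task results so rows don't accumulate forever
# (the beat cleanup task, discussed below, relies on this).
CELERY_RESULT_EXPIRES = 60 * 60 * 24  # one day, in seconds
```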
- [x] Be sure settings.py has the correct values for the RW database user, then start the worker: `celery -A ezidapp worker -l INFO`
- [x] For testing, this daemon can be run under `screen` (or similar), which keeps it alive after disconnection, until we figure out with Ashley the best way to daemonize it under systemd.
- [x] Figure out how to daemonize Celery with Ashley and get it added to systemd.
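The linked daemonizing guide boils down to a unit file along these lines. This is only a sketch: the user, paths, and virtualenv location are placeholders to be settled with Ashley, and the real deployment may use the multi-worker templates from the Celery docs instead.

```ini
# /etc/systemd/system/ezid-celery.service -- illustrative sketch
[Unit]
Description=EZID Celery worker
After=network.target

[Service]
Type=simple
User=ezid
WorkingDirectory=/path/to/ezid
ExecStart=/path/to/venv/bin/celery -A ezidapp worker -l INFO
Restart=on-failure

[Install]
WantedBy=multi-user.target
```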
The items above are done on dev/stg at this point, except for daemonization and working with Ashley to put the service under systemd.
Things to consider for systemd and the daemon: https://docs.celeryq.dev/en/stable/userguide/daemonizing.html
- [x] Getting the configuration right
- [x] A CeleryBeat service is needed for task cleanup, but it doesn't require an external service unless you use it for extensive scheduling or need an external store. Also verify that tasks are actually being cleaned up from the DB.
CeleryBeat can be manually started with
`celery -A ezidapp beat -s <directory-for-local-file-for-task-tracking>`
See https://docs.celeryq.dev/en/main/userguide/periodic-tasks.html#beat-custom-schedulers and https://stackoverflow.com/questions/13147581/celery-task-clean-up-with-db-backend and https://saadali18.medium.com/setup-your-django-project-with-celery-celery-beat-and-redis-644dc8a2ac4b
I'm documenting this since it's a little involved to remember all the steps for a first deploy:
Add the `matomo_site_id` and `matomo_auth_token` parameters. See https://ucopedu.atlassian.net/wiki/spaces/UC3/pages/45439538/SSM+ParameterStore for how to do this.
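For reference, fetching those values at runtime might look like the sketch below. The `/ezid/` path prefix is an assumption (the Confluence page above documents the real naming convention), and the boto3 call requires AWS credentials, so it's only exercised when actually invoked.

```python
# Sketch of reading Matomo settings from SSM Parameter Store.
# The "/ezid/" prefix is an assumed naming convention.

def ssm_param_path(name, prefix="/ezid/"):
    """Build the full Parameter Store path for a setting name."""
    return prefix + name

def get_ssm_param(name, prefix="/ezid/"):
    """Fetch a (possibly SecureString) parameter value via boto3."""
    import boto3  # imported lazily so this module loads without AWS deps
    ssm = boto3.client("ssm")
    resp = ssm.get_parameter(
        Name=ssm_param_path(name, prefix), WithDecryption=True
    )
    return resp["Parameter"]["Value"]

# e.g. matomo_site_id = get_ssm_param("matomo_site_id")
```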