Email address is a required and validated field when submitting a request for dataset extracts. The address is saved to the `dataset_params.json` file, and the Celery task sends an email on completion of the requested extracts.
Note that it's necessary to modify the `worker` section of the `docker-compose.yml` file on the primary node: duplicating the environment variables from the `server` section makes them available to the Celery worker for use with `flask-mail`.
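A sketch of that duplication (the variable names below are assumptions for illustration; duplicate whichever mail-related variables your `server` section actually defines):

```yaml
# docker-compose.yml (primary node) -- illustrative excerpt
services:
  server:
    environment:
      - SFM_SMTP_HOST=smtp.example.edu   # assumed variable names
      - SFM_EMAIL_USER=sfm_no_reply
      - SFM_EMAIL_PASSWORD=changeme
  worker:
    environment:
      # Duplicated from the server section so the Celery worker
      # can configure flask-mail with the same mail settings.
      - SFM_SMTP_HOST=smtp.example.edu
      - SFM_EMAIL_USER=sfm_no_reply
      - SFM_EMAIL_PASSWORD=changeme
```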
Successfully tested with up to 50 simultaneous jobs. Simultaneous jobs that use the same `dataset_name` seem to trigger SMTP errors, but that situation seems unlikely to occur in practice. (Simultaneous jobs with different `dataset_name`s go through as expected.)
I'm still not sure why `sfm_no_reply@email.gwu.edu` wasn't working yesterday. We may want to monitor that closely in production to make sure users are getting notified.
For testing:

- Create a new dataset extract.
- Modify an existing extract and submit it again.
- Submit a few longer-running dataset extracts in parallel.