For this typical use case:

1. user uploads some files for processing

The changes in step 1 are not reflected in step 2, because the data server caches metadata only daily. Rather than increasing the frequency of caching, a lower-overhead approach would be to refresh metadata only for the project chosen by the user when uploading a new batch of data.
So:

[ ] the upload server should trigger a metadata refresh for the project specified by the user at upload time, before queuing the uploaded files for processing, so that new summary data products end up in sensible locations
[ ] for cleanup of past products: when deployment records change for a receiver, queue a job to re-export its summary product (?). This needs some thought to avoid doing so needlessly, e.g. re-exporting twice for a single data-file upload.
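The first item above could be sketched roughly as below. This is a minimal illustration, not the actual server code; all names (`DataServer`, `refresh_project_metadata`, `handle_upload`, the queue shape) are hypothetical stand-ins for whatever the upload server actually uses.

```python
# Hypothetical sketch: refresh metadata for just the uploading user's
# project, *before* queuing the files, so processing sees current
# metadata and summary products land in sensible locations.
from dataclasses import dataclass, field


@dataclass
class DataServer:
    # project_id -> cached metadata; normally rebuilt only daily
    metadata_cache: dict = field(default_factory=dict)

    def fetch_metadata(self, project_id):
        # Stand-in for the real (more expensive) metadata query.
        return {"project": project_id, "fresh": True}

    def refresh_project_metadata(self, project_id):
        # Targeted refresh for one project: much cheaper than
        # increasing the frequency of the full daily cache rebuild.
        self.metadata_cache[project_id] = self.fetch_metadata(project_id)


def handle_upload(server, project_id, files, queue):
    # 1. Refresh metadata for just this project.
    server.refresh_project_metadata(project_id)
    # 2. Only then queue the uploaded files for processing.
    for f in files:
        queue.append((project_id, f))


server = DataServer()
jobs = []
handle_upload(server, "proj-42", ["a.dat", "b.dat"], jobs)
```

The important ordering is that the refresh completes before any file is queued; otherwise a processing worker could still pick up a job against stale metadata.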
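For the second item, one way to avoid queuing the re-export needlessly is to deduplicate by receiver: a change only enqueues a job if none is already pending for that receiver. A sketch, with illustrative names (`ReexportQueue`, `on_deployment_change`, `drain` are assumptions, not existing code):

```python
# Hypothetical sketch: when deployment records change for a receiver,
# queue a re-export of its summary product at most once, even if a
# single data-file upload touches several deployment records.
class ReexportQueue:
    def __init__(self):
        self._pending = set()  # receivers with a re-export already queued
        self._jobs = []        # ordered list of queued re-export jobs

    def on_deployment_change(self, receiver_id):
        # Enqueue only if no re-export is already pending for this
        # receiver, so one upload can't trigger duplicate exports.
        if receiver_id not in self._pending:
            self._pending.add(receiver_id)
            self._jobs.append(receiver_id)

    def drain(self):
        # Hand the queued jobs to a worker and reset pending state.
        jobs, self._jobs = self._jobs, []
        self._pending.clear()
        return jobs


q = ReexportQueue()
q.on_deployment_change("recv-1")
q.on_deployment_change("recv-1")  # duplicate change from the same upload
q.on_deployment_change("recv-2")
```

A variant would be to debounce instead (wait a short interval after the last change before exporting), which also covers several uploads arriving close together.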