Some suggested topics:

Data retention policy for failed jobs and triage directories on S3
New maap-dps-sandbox queue that is always running (on demand) with minimal resources (8 GB), to enable users to test their job inputs/environments as registered with DPS. Because the worker is already up, users save the time normally spent waiting for worker startup. This queue is not meant to run algorithms to completion, but to check how the inputs would look within DPS, whether the run script activates the correct environment, etc.
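For reference, a minimal sketch of submitting a quick smoke-test job to this queue with the maap-py client. The host, algorithm id, version, and input name below are placeholders, and submitJob argument/attribute names may differ between maap-py releases:

```python
from maap.maap import MAAP

maap = MAAP(maap_host="api.maap-project.org")  # placeholder host

# Submit to the sandbox queue to check how inputs are staged and whether the
# run script activates the expected environment -- not to run to completion.
job = maap.submitJob(
    identifier="sandbox-smoke-test",         # hypothetical job label
    algo_id="my_registered_algorithm",       # hypothetical algorithm id
    version="main",                          # hypothetical version
    queue="maap-dps-sandbox",                # the new on-demand 8 GB queue
    input_file="s3://bucket/path/to/input",  # hypothetical input parameter
)
print(job.id, maap.getJobStatus(job.id))
```

Inspecting the job's _stderr afterwards shows whether the inputs were staged as expected and the right environment was activated.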
DPS failed-job tracebacks now show only the last 20 lines of _stderr, to prevent large error logs from being shown on the faceted (Figaro) UI.
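For context, truncation like this amounts to keeping only the tail of the stderr file. A small illustrative sketch, not the actual Figaro implementation; the `_stderr.txt` filename is hypothetical:

```python
from collections import deque

# Keep only the last n lines of a potentially huge stderr log without loading
# the whole file into memory: a deque with maxlen drops older lines as it
# consumes the file.
def tail_lines(path: str, n: int = 20) -> str:
    with open(path, "r", errors="replace") as f:
        return "".join(deque(f, maxlen=n))

# e.g. the truncated traceback shown on the faceted UI
print(tail_lines("_stderr.txt", 20))
```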
Inform users / update documentation to indicate that they now need to source activate the required environment in all images. This includes the default workspace-specific environment.
Users should now use the conda environments such as vanilla, pangeo, r, and isce3 in their respective workspaces instead of the base conda environment. Highlight that all packages requested by the UWG are now in those conda environments; if they have further package requests, they should reach out to @anilnatha or me.
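A quick way for users to confirm their run script activated the intended environment, e.g. as the first step of a script tested on the sandbox queue. This is a hedged sketch assuming a conda-based activation (conda sets CONDA_DEFAULT_ENV when an environment is activated); the expected name is a placeholder:

```python
import os
import sys

# Diagnostic to drop into a run script or notebook: shows which conda
# environment (if any) is active and which Python interpreter will be used.
expected = "pangeo"  # placeholder; e.g. "vanilla", "r", or "isce3" per workspace
active = os.environ.get("CONDA_DEFAULT_ENV", "<none>")

print(f"active conda env: {active}")
print(f"python interpreter: {sys.executable}")

if active != expected:
    # Fail fast so a sandbox-queue test surfaces the misconfiguration early.
    sys.exit(f"expected env '{expected}' but got '{active}'; "
             "did the run script 'source activate' the right environment?")
```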