riverma closed this 1 year ago
Note, in addition to this template, I propose creating the following labels for easy sorting:
@riverma - when do you envision the OPS venue being selected for a request? Messing with our operations venue, or with the operators themselves, seems very risky.

The original intent of this form was for other teams to request runs for testing, or VnV, etc. Maybe you're wanting to use this for tracking on-demand requests for granules missed somehow in normal operations?
@sjlewis-jpl - yes exactly, the intent for the OPS venue was to track future possible on-demand requests more formally. Though we should be clear about the budgeting for such requests prior to accepting them. We can always reject certain processing requests with fields unsupported for the time being, but tracking them might be helpful for (at the least) accounting purposes.
The software version fields seem odd. It looks like someone can mix and match the SW versions, but they're not actually independent - each PCM version has specific PGE versions, and each PGE version has a specific SAS version.
Maybe we should just condense down to a single SW version, say, `Software Version`, or `SDS Version`, or `System Version`?
Overall this form looks great. There are a few small changes to make, in the bullet list below.
- [x] Remove PGE & PCM versions, replace with System Version
- [x] Under "Share Results", specify that the DAAC option is for their UAT instance. Also add a note that requests in the OPS venue will go to the DAAC (not a choice).
- [x] Add a venue called `VnV` (or `InT`?), for requests related to SDS or System VnV.
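The checklist above (a single `System Version` field, a `VnV` venue option, and the "Share Results" note) could be sketched as a GitHub issue form fragment. Field IDs, labels, and option names here are assumptions for illustration, not the project's actual template:

```yaml
# Hypothetical fragment of a GitHub issue form
# (e.g. .github/ISSUE_TEMPLATE/processing-request.yml).
body:
  - type: input
    id: system-version
    attributes:
      label: System Version
      description: Single release version (PCM/PGE/SAS versions are pinned to it)
    validations:
      required: true
  - type: dropdown
    id: venue
    attributes:
      label: Venue
      options:
        - VnV
        - OPS
    validations:
      required: true
  - type: checkboxes
    id: share-results
    attributes:
      label: Share Results
      description: >
        The DAAC option delivers to the DAAC's UAT instance. Requests in the
        OPS venue always go to the DAAC (not a choice).
      options:
        - label: Share with the DAAC (UAT instance)
```

Collapsing PCM/PGE/SAS into one `System Version` field avoids invalid mix-and-match combinations, since each PCM version already pins specific PGE and SAS versions.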
@sjlewis-jpl - take a look at the rendering and let me know what you think of the changes.
- [ ] Add a subdirectory to capture previous runs, and include (at least to start) files representing the R1 validation dataset, and any others we have available. These are reference-able in the Input Data field. (If you prefer, I'm open to splitting this off into another PR)
Hmm. Not quite sure about what you'd like here? Add a subdirectory to what exactly? A place in GitHub to store run results?