`slapp` created manifests from a postgres database. Now we want to create more labeling manifests based on the new `ophys_etl` pipeline output, which is not in said postgres database. The current dev-mode outputs from which we'll want to sample and upload to `slapp` are, for example, here:
/allen/aibs/informatics/danielk/dev_LIMS/supplemental/1003770203
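Before wiring anything into the pipeline, we'll need to enumerate what each dev-mode experiment directory actually contains. A minimal sketch of that discovery step is below; the expected file names (`rois.json`, `motion_corrected_video.h5`) are assumptions about the `ophys_etl` output layout, not confirmed names.

```python
from pathlib import Path

# Assumed ophys_etl dev-mode outputs per experiment directory
# (e.g. .../supplemental/<experiment_id>). Names are hypothetical.
EXPECTED_FILES = ("rois.json", "motion_corrected_video.h5")


def find_experiment_inputs(experiment_dir):
    """Map each expected output file name to its path, raising if any is missing."""
    root = Path(experiment_dir)
    found = {}
    for name in EXPECTED_FILES:
        path = root / name
        if not path.exists():
            raise FileNotFoundError(f"expected {name} under {root}")
        found[name] = path
    return found
```

This keeps the file-layout assumption in one place, so if the real `ophys_etl` output names differ, only `EXPECTED_FILES` needs updating.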
We need some method(s) to inject those data sources into the `slapp` transform pipeline, replacing the postgres-dependent steps here: https://github.com/AllenInstitute/segmentation-labeling-app/blob/37c53df0186169e1338d90f88536e1f3fe76f070/slapp/transforms/transform_pipeline.py#L192-L209

This will likely need a new ROI constructor, and some way of passing in the movie path.
Tasks

- [ ] `slapp` branch that can ingest these alternative data sources
- [ ] Validation