LKINSEY / MetaDataGUI

Fast, easy UI that allows a user to transfer raw experimental data and associated metadata to AIND cloud services after an experiment is run

Add capsule_id and mount to api call #15

Closed arielleleon closed 1 week ago

arielleleon commented 2 weeks ago

In metaDataWorker.py, could you please add the following keys to the data structure where you submit a job to aind-data-transfer-service:

capsule_id: a2c94161-7183-46ea-8b70-79b82bb77dc0
mount: ophys
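A minimal sketch of what the requested additions could look like in the job payload. This is a hypothetical fragment, not the actual `metaDataWorker.py` code; as the discussion below notes, the corresponding `BasicUploadJobConfigs` fields are named `process_capsule_id` and `input_data_mount`.

```python
# Hypothetical payload fragment illustrating the two requested keys.
job_request = {
    "capsule_id": "a2c94161-7183-46ea-8b70-79b82bb77dc0",  # Code Ocean capsule to run after upload
    "mount": "ophys",  # mount name for the input data inside the capsule
}
print(job_request["capsule_id"])
print(job_request["mount"])
```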

LKINSEY commented 1 week ago

When changing the mount, are you referring to the s3_prefix?

jtyoung84 commented 1 week ago

> When changing the mount, are you referring to the s3_prefix?

By default, the pipeline mount will use the s3_prefix if a mount is not provided. Otherwise, it will use the specific string you supply.
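The fallback described above can be sketched with a hypothetical helper (not the service's actual code): an explicit mount wins, otherwise the s3_prefix is used.

```python
from typing import Optional


def resolve_input_data_mount(input_data_mount: Optional[str], s3_prefix: str) -> str:
    """Return the explicit mount if one was supplied; otherwise fall back to the s3_prefix."""
    return input_data_mount if input_data_mount else s3_prefix


# An explicit mount string takes precedence over the prefix.
print(resolve_input_data_mount("ophys", "ophys_123456_2024-01-01_12-00-00"))
# With no mount provided, the s3_prefix is used as the mount name.
print(resolve_input_data_mount(None, "ophys_123456_2024-01-01_12-00-00"))
```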

arielleleon commented 1 week ago

> When changing the mount, are you referring to the s3_prefix?

@LKINSEY - the mount should be the current mount name of the pipeline. In this case, the single plane pipeline mount is "ophys". This goes in the input_data_mount below.

upload_job_configs = BasicUploadJobConfigs(
    s3_bucket=self.config.s3_bucket,
    platform=self.config.platform,
    subject_id=str(self.config.subject_id),
    acq_datetime=self.config.acquisition_datetime.strftime("%Y-%m-%d %H:%M:%S"),
    modalities=modality_configs,
    metadata_dir=PurePosixPath(self.config.destination) / self.config.name,
    process_capsule_id=self.config.capsule_id,
    project_name=self.config.project_name,
    input_data_mount=self.config.mount,
    force_cloud_sync=self.config.force_cloud_sync,
)
jtyoung84 commented 1 week ago

> When changing the mount, are you referring to the s3_prefix?

> @LKINSEY - the mount should be the current mount name of the pipeline. In this case, the single plane pipeline mount is "ophys". This goes in the input_data_mount below.

I sent Lucas a snippet with an updated example. I'm trying to move away from the top level process_capsule_id and input_data_mount in favor of the codeocean_configs field. This will still work though.
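The updated snippet itself is not included in this thread, so the following is an assumed shape only: a rough illustration of moving the capsule settings under a `codeocean_configs` field. The nested key names here are hypothetical, not the confirmed aind-data-transfer-service schema.

```python
# Illustrative only: nested field names are assumptions, not the real API.
codeocean_configs = {
    "capsule_id": "a2c94161-7183-46ea-8b70-79b82bb77dc0",
    "input_data_mount": "ophys",
}
print(codeocean_configs["input_data_mount"])
```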

LKINSEY commented 1 week ago

Included capsule_id and mount now.