NHMDenmark / DaSSCo-Integration

This repo will include the integration of DaSSCo storage from Northtec.

Protocol for handling MOS between HPC/Integration #54

Open Baeist opened 3 months ago

Baeist commented 3 months ago

How should we handle it when an MOS is found?

Should we have a separate api endpoint for receiving this information?

Do any jobs need to specifically know that they are dealing with an MOS (or maybe MSO)?

@ThomasAlscher1991

Baeist commented 3 months ago

@bhsi-snm @ThomasAlscher1991 I would like an example of what the data the integration server receives could look like for an MOS asset and the label asset associated with it.

ThomasAlscher1991 commented 3 months ago

**Example for MOS assets**

This is real-world data about two connected assets: one label asset and one specimen asset.

| guid | barcodes in the file | asset subject |
| --- | --- | --- |
| 7e8-3-15-0c-0e-1b-0-001-00-000-053cbc-00000 | "Label", "Disposable 1" | label |
| 7e8-3-15-0c-1b-2a-0-001-00-000-0970e7-00000 | "Disposable 1", "42198273" | specimen |

The first calls to update metadata for both assets will come from the barcode reader to the _https://www.integration.bhsi.xyz/api/v1/update_asset_ endpoint and will contain the following data:

1)

```python
data_dict = {
    "guid": "7e8-3-15-0c-0e-1b-0-001-00-000-053cbc-00000",
    "job": "barcodeReader",
    "status": "DONE",
    "data": {
        "barcode": ["Label", "Disposable 1"],
        "asset_subject": "label"
    }
}

json_dict = json.dumps(data_dict)
response = client.post(URL, data=json_dict)
```

2)

```python
data_dict = {
    "guid": "7e8-3-15-0c-1b-2a-0-001-00-000-0970e7-00000",
    "job": "barcodeReader",
    "status": "DONE",
    "data": {
        "barcode": ["Disposable 1", "42198273"],
        "asset_subject": "specimen"
    }
}

json_dict = json.dumps(data_dict)
response = client.post(URL, data=json_dict)
```

The second round of calls (the ones that actually connect the assets) to the _https://www.integration.bhsi.xyz/api/v1/update_asset_ endpoint looks like this:

```python
data_dict = {
    "guid": "7e8-3-15-0c-0e-1b-0-001-00-000-053cbc-00000",
    "job": "MOSConnector",
    "status": "DONE",
    "data": {
        "barcode": ["42198273"]
    }
}

json_dict = json.dumps(data_dict)
response = client.post(URL, data=json_dict)
```

@Baeist does this help?

Baeist commented 3 months ago

@ThomasAlscher1991 Yes, this is exactly what I wanted to see.

I do think we should change how we do this. Having things other than actual defining barcodes go into the barcode field seems wrong, and it adds a lot of checks I need to do before updating the data from it. So I think adding a few extra fields to the request body, or creating an extra endpoint to handle these cases, is probably the better approach.
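
For illustration, a request using the "extra fields" option could look like the sketch below for the MOSConnector step: the connected specimen's barcode travels in its own key instead of being placed in the label asset's barcode list. The field name `connected_specimen_barcode` is hypothetical, and `requests` stands in for the unspecified `client` used in the examples above.

```python
import json
import requests  # stands in for the unspecified "client" in the examples above

URL = "https://www.integration.bhsi.xyz/api/v1/update_asset"

# Sketch of the "extra fields" option: the specimen barcode that connects the
# assets is sent in its own (hypothetical) key rather than in "barcode".
data_dict = {
    "guid": "7e8-3-15-0c-0e-1b-0-001-00-000-053cbc-00000",
    "job": "MOSConnector",
    "status": "DONE",
    "data": {
        "connected_specimen_barcode": "42198273"
    }
}

response = requests.post(URL, data=json.dumps(data_dict))
print(response.status_code)
```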

Baeist commented 3 months ago

Updated with a new endpoint for receiving the barcodes and MOS detection results.
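
For context, a call to the new endpoint might look something like the sketch below. The path `/api/v1/update_barcodes` and every field apart from `guid`, `job`, and `status` are assumptions for illustration only; the comment above only states that a dedicated endpoint now exists.

```python
import json
import requests

# Hypothetical endpoint path and payload shape for the dedicated barcode/MOS endpoint.
URL = "https://www.integration.bhsi.xyz/api/v1/update_barcodes"

data_dict = {
    "guid": "7e8-3-15-0c-1b-2a-0-001-00-000-0970e7-00000",
    "job": "barcodeReader",
    "status": "DONE",
    "barcodes": ["Disposable 1", "42198273"],  # actual barcode values only
    "asset_subject": "specimen",               # label vs. specimen
    "mos": True                                # MOS detection result
}

response = requests.post(URL, data=json.dumps(data_dict))
```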

Baeist commented 3 months ago

**Separate API endpoint:** A dedicated endpoint for receiving MOS-related information helps keep the API clear and well organized. It lets us implement specific logic for processing MOS data without mixing it with other kinds of updates, and it makes the API easier to manage and scale as the application grows.
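
As a rough sketch of what such a dedicated endpoint could look like on the integration server, assuming a FastAPI-style service purely for illustration (the actual framework, route, and field names are not specified in this thread):

```python
from typing import List, Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class MosUpdate(BaseModel):
    # Illustrative payload model; field names mirror the examples in this thread.
    guid: str
    job: str
    status: str
    barcodes: List[str]
    asset_subject: Optional[str] = None
    mos: bool = False


@app.post("/api/v1/update_barcodes")
def update_barcodes(update: MosUpdate):
    # MOS-specific logic lives here, separate from the generic update_asset flow.
    # (Persisting the update is left out of this sketch.)
    return {"guid": update.guid, "received": True}
```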

**Specific job handling:** Some jobs or workflows need to know explicitly that they are dealing with MOS data. In our case MOS data describes how assets are connected, for example a label asset and a specimen asset that share a barcode, so any job that updates metadata or links assets has to recognize MOS-related information and handle it appropriately to keep those connections accurate.
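
A minimal sketch of how a job could detect that it is dealing with an MOS asset, using the metadata fields shown earlier in this thread; the dispatch logic and handler names are hypothetical:

```python
def handle_update(asset: dict) -> None:
    # Route MOS-related updates to a dedicated path. The heuristic used here
    # (MOSConnector job or presence of "asset_subject") is an assumption.
    data = asset.get("data", {})
    if asset.get("job") == "MOSConnector" or "asset_subject" in data:
        handle_mos_asset(asset)
    else:
        handle_regular_asset(asset)


def handle_mos_asset(asset: dict) -> None:
    print(f"MOS-aware handling for {asset['guid']}")


def handle_regular_asset(asset: dict) -> None:
    print(f"Regular handling for {asset['guid']}")
```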

**Specialized processing:** MOS data may require validation or processing steps that other data in the system does not. For example, before connecting two assets we may want to check that the barcode used to link them is actually present on both, or that the label and specimen roles are consistent. Identifying these MOS-specific requirements up front lets us design the endpoints and jobs to accommodate them.
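
As an example of such a validation step, the sketch below checks that a label asset and a specimen asset actually share a barcode before they are connected; it uses the example data from earlier in this thread, and the function itself is illustrative:

```python
def shared_barcodes(label_barcodes, specimen_barcodes):
    # Return the barcode values the two assets have in common.
    return set(label_barcodes) & set(specimen_barcodes)


label = {
    "guid": "7e8-3-15-0c-0e-1b-0-001-00-000-053cbc-00000",
    "barcode": ["Label", "Disposable 1"],
}
specimen = {
    "guid": "7e8-3-15-0c-1b-2a-0-001-00-000-0970e7-00000",
    "barcode": ["Disposable 1", "42198273"],
}

common = shared_barcodes(label["barcode"], specimen["barcode"])
if not common:
    raise ValueError("Assets do not share a barcode; refusing to connect them.")
print(f"Connecting assets via shared barcode(s): {common}")  # {'Disposable 1'}
```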

Overall, a clear strategy for handling MOS data, with a dedicated endpoint, explicit handling logic in the jobs that need it, and MOS-specific validation, keeps the protocol between the HPC and the integration server consistent and reliable.