Closed: stefpiatek closed this issue 10 months ago
Files
We have dummy files: I think the procedure concept use might need to be updated, but this gives us the structure to get started.
private.zip public.zip extract_summary.json
Started poking around with the data after extracting the parquet files:
from pathlib import Path

import pandas as pd

public_dir = Path("public")
private_dir = Path("private")

if __name__ == "__main__":
    # MRN is in people.PrimaryMrn
    people = pd.read_parquet(private_dir / "PERSON_LINKS.parquet")
    # accession number is in accessions.AccessionNumber
    accessions = pd.read_parquet(private_dir / "PROCEDURE_OCCURRENCE_LINKS.parquet")
    # study_date is in procedure.procedure_date
    procedure = pd.read_parquet(public_dir / "PROCEDURE_OCCURRENCE.parquet")
    # joining the data together
    people_procedures = people.join(procedure, on="person_id", lsuffix="_people")
    joined = people_procedures.join(accessions, on="procedure_occurrence_id", rsuffix="_links")
    # TODO: filter by procedure concept to match the imaging type, could hardcode for now
    print(joined[["person_id", "PrimaryMrn", "AccessionNumber", "procedure_date"]])
What is wrong with the current CSV method? If this is a performance concern, have any measurements been made to confirm this? Also, why are there multiple parquet files (that look like a 1:1 dump of OMOP tables) instead of say, a single parquet file that comes from an OMOP query, that is similar in format to the current CSV format?
Also, what was the difference between the private and public parquet files? And which parquet file are we using as an input for the PIXL cli and/or are we combining data from both files as the input?
What is wrong with the current CSV method? If this is a performance concern, have any measurements been made to confirm this? Also, why are there multiple parquet files (that look like a 1:1 dump of OMOP tables) instead of say, a single parquet file that comes from an OMOP query, that is similar in format to the current CSV format?
Nah, not performance. OMOP ES now defines the cohort definition, and we want to use its output as the input to the tool so that the workflow is simplified. They are indeed dumps of OMOP tables; we're going to publish the public parquet files to the DSH.
Also, what was the difference between the private and public parquet files? And which parquet file are we using as an input for the PIXL cli and/or are we combining data from both files as the input?
We have decided we're not doing filtering right now, but this is relevant for when we do:
@ruaridhg and I looked at the example parquet files and found the following two OMOP codes:
“CT of chest” https://athena.ohdsi.org/search-terms/terms/4058335
“CT of thorax with contrast” https://athena.ohdsi.org/search-terms/terms/4327032
Firstly, are we interested in X-rays or CTs? I thought it was the former.
Also, given the way the OMOP ontology works, there isn’t going to be a single code for most things. You have to traverse the “is-a” relationships to find what you want. I don’t think this will add that much complexity to the code though - we don’t have to query a live OMOP database; we could just do it once and hard code those values. Might get a bit unwieldy if we expand beyond chest x-rays.
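For when we do add filtering, a minimal sketch of the hard-coded approach, assuming the standard OMOP CDM procedure_concept_id column in PROCEDURE_OCCURRENCE; the concept IDs are just the two found above, and any descendant concepts found by traversing the is-a relationships offline would be added to the same set:

import pandas as pd

# Hard-coded OMOP concept IDs for the imaging procedures of interest
# (the two codes found above; extend the set if the cohort changes).
IMAGING_CONCEPT_IDS = {
    4058335,  # CT of chest
    4327032,  # CT of thorax with contrast
}

def filter_imaging_procedures(procedure: pd.DataFrame) -> pd.DataFrame:
    """Keep only rows whose procedure_concept_id is one of the hard-coded imaging codes."""
    return procedure[procedure["procedure_concept_id"].isin(IMAGING_CONCEPT_IDS)]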
Thanks @dcartner. I wonder if it's reasonable to ask for the OMOP ES log to define which OMOP IDs are imaging for each export. That way we can process this generically.
Good question, I'm not very familiar with the log file at the moment but will get back to you
Definition of Done / Acceptance Criteria
procedure_id added to rabbitmq messages
settings.cdm_source_name (in the json) added to rabbitmq messages
datetime (in the json) added to rabbitmq messages
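For illustration only, a hypothetical sketch of what the extended queue message might carry once those fields are added; the class and field names here are assumptions, not the actual PIXL message format:

import json
from dataclasses import asdict, dataclass
from datetime import datetime

@dataclass
class ImagingStudyMessage:
    """Hypothetical shape of a rabbitmq message after the new fields are added."""
    mrn: str
    accession_number: str
    study_datetime: datetime
    procedure_occurrence_id: int  # procedure_id from the parquet extract
    project_name: str             # settings.cdm_source_name from the json
    omop_es_datetime: datetime    # datetime from the json

    def serialise(self) -> bytes:
        """Serialise to JSON bytes for publishing to the queue."""
        payload = asdict(self)
        payload["study_datetime"] = self.study_datetime.isoformat()
        payload["omop_es_datetime"] = self.omop_es_datetime.isoformat()
        return json.dumps(payload).encode("utf-8")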
Testing
Convert the current csv tests to use parquet files. It may be easier for reviewing to create a helper function that reads in the test csv files and writes the required parquet files to a tmpdir. That way we can keep plain text as inputs so it's easier to compare diffs in test inputs; see the sketch below.
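One possible shape for that helper, assuming pytest's tmp_path fixture and pandas; the directory names are placeholders rather than the real test fixtures:

from pathlib import Path

import pandas as pd

def csv_to_parquet_dir(csv_dir: Path, parquet_dir: Path) -> Path:
    """Write each plain-text CSV test input out as a parquet file with the same stem.

    The human-readable CSVs stay as the canonical test inputs; the code under
    test only ever sees the generated parquet files.
    """
    parquet_dir.mkdir(parents=True, exist_ok=True)
    for csv_file in sorted(csv_dir.glob("*.csv")):
        pd.read_csv(csv_file).to_parquet(parquet_dir / f"{csv_file.stem}.parquet")
    return parquet_dir

# Example usage in a pytest test (the test data path is hypothetical):
# def test_messages_from_parquet(tmp_path):
#     parquet_dir = csv_to_parquet_dir(Path("tests/data"), tmp_path / "parquet")
#     ...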
Documentation
Dependencies
Details and Comments
Current state
Currently there is a csv file input which defines the MRN, accession number and study datetime.
Info