Closed: talkhanz closed this issue 9 months ago
Sorry, I don't have a pipeline for pulling things off the ENCODE portal which is where the processed MPRA data lives. Supplementary Table 1 contains accession IDs for counts data on the ENCODE portal. You can use these IDs to find the linked Dataset ID. The ENCODE Dataset will have all of the Processed Data files under the "File details" tab. Once you're at this list, look for "bed element enrichments" and "GRCh38" for File type and Genome assembly, and those are the files you'll want. Here's the spec for the BED6+5 files.
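The filtering step above can be sketched in Python. This is only a sketch over mocked metadata records: the field names (`output_type`, `assembly`, `accession`) are assumptions about the ENCODE portal's file JSON and may need adjusting against the actual API response, and the portal displays the output type with a "bed" file-format prefix.

```python
# Sketch: pick out the "bed element enrichments" / GRCh38 files from a
# dataset's file list. Field names ("output_type", "assembly") are assumed
# from the ENCODE portal JSON and may differ in practice.

def select_enrichment_files(files):
    """Return file records matching the MPRA BED enrichment outputs."""
    return [
        f for f in files
        if f.get("output_type") == "element enrichments"  # shown as "bed element enrichments" on the portal
        and f.get("assembly") == "GRCh38"
    ]

# Example with mocked (hypothetical) metadata records:
files = [
    {"accession": "ENCFF000AAA", "output_type": "element enrichments", "assembly": "GRCh38"},
    {"accession": "ENCFF000BBB", "output_type": "element enrichments", "assembly": "hg19"},
    {"accession": "ENCFF000CCC", "output_type": "reads", "assembly": "GRCh38"},
]
selected = select_enrichment_files(files)
```

In a real script you would fetch the dataset JSON (e.g. with `requests` against the portal's `?format=json` endpoints) and pass its file list through a filter like this.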
I can try to put together something automated to collect the relevant ENCODE data, but manual collection is what's happening now. I think the ENCODE consortium will be publishing a paper soon about retrieving/analyzing the MPRA data on the portal, so referring to that will likely be the long term best practice.
Thanks for getting in touch and sharing the details. May I ask what the target columns are in SupTable 2 - UKBB_GTEX_CODA_averaged_no_cutoffs.txt that the model is learning to predict?
I'll be closing this soon
Apologies for the delay. The appropriate columns are {K562,HepG2,SKNSH}_mean.
@talkhanz I wanted to let you know that the supplementary table was updated and is available here. In the new file, the predicted features are activity_columns=['K562_log2FC', 'HepG2_log2FC', 'SKNSH_log2FC'], and we also use stderr_columns=['K562_lfcSE', 'HepG2_lfcSE', 'SKNSH_lfcSE'] for preprocessing.
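For anyone following along, pulling those columns out of the table with pandas might look like the following minimal sketch. The tab separator is an assumption (the file has a .txt extension; adjust `sep` if needed), and `load_mpra_targets` is a hypothetical helper, not something from the paper's code.

```python
import pandas as pd

# Column names taken from the updated supplementary table described above.
ACTIVITY_COLUMNS = ["K562_log2FC", "HepG2_log2FC", "SKNSH_log2FC"]
STDERR_COLUMNS = ["K562_lfcSE", "HepG2_lfcSE", "SKNSH_lfcSE"]

def load_mpra_targets(path_or_buffer):
    """Read the supplementary table (assumed tab-separated) and return
    (targets, stderrs) arrays for the three cell types."""
    df = pd.read_csv(path_or_buffer, sep="\t")
    targets = df[ACTIVITY_COLUMNS].to_numpy()  # values the model predicts
    stderrs = df[STDERR_COLUMNS].to_numpy()    # used during preprocessing
    return targets, stderrs
```

Usage would be something like `targets, stderrs = load_mpra_targets("UKBB_GTEX_CODA_averaged_no_cutoffs.txt")`, yielding two (n_elements, 3) arrays in K562/HepG2/SKNSH order.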
Hey @sjgosai, do we have an example Python script of the data preprocessing mentioned in the paper? Basically, going from the ENCODE files for MPRA to Supplementary Table 2 (SupTable 2 - UKBB_GTEX_CODA_averaged_no_cutoffs.txt) in the paper.