HazyResearch / hyena-dna

Official implementation for HyenaDNA, a long-range genomic foundation model built with Hyena
https://arxiv.org/abs/2306.15794
Apache License 2.0

Chromatin Preprocessing #43

Open lhendre opened 10 months ago

lhendre commented 10 months ago

I'm working on and using the Chromatin portion of the project; however, I am running into issues with the preprocessing. Can you provide any more details on how you generated the initial data? I have followed Sei and DeepSEA, but I seem to be missing train_hg38_coords_targets.csv. There have been several versions of Sei and DeepSEA, so I'm currently going through the old versions, but when I follow the previous steps I still seem to be missing that file.

cbirchsy commented 10 months ago

Hi @lhendre, I believe the original DeepSEA data can be downloaded from this link and you may also find this repo useful if you want to recreate it yourself

lhendre commented 10 months ago

Hi, I will try that shortly and get back to you!

lhendre commented 10 months ago

So I'm going to try recreating the data, or using that data and renaming it. But to verify: the code is specifically looking for a file with the naming convention {self.datapath}/{split}{self.ref_genome_version}_coords_targets.csv (link to the code below). That file doesn't appear to exist or be created anywhere in the repo. That being said, I'll try the suggested repo and try renaming its output to match that convention and using it instead.

https://github.com/search?q=org%3AHazyResearch+coords_target&type=code
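
For reference, here is a minimal sketch (not the repo's code; the datapath value is a placeholder I made up) of the filenames that naming convention implies, matching the train_hg38_coords_targets.csv mentioned above:

# Sketch of the expected filenames (assumptions: datapath is wherever the processed
# chromatin data lives; hg38 comes from the filename mentioned earlier in this thread).
datapath = "data/chromatin_profile"  # hypothetical location
ref_genome_version = "hg38"
for split in ("train", "val", "test"):
    print(f"{datapath}/{split}_{ref_genome_version}_coords_targets.csv")
    # e.g. data/chromatin_profile/train_hg38_coords_targets.csv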

lhendre commented 10 months ago

Hi, so I have run the above code to generate the data (specifically using the repo you suggested). The issue is still the same in that I'm not getting a coords_targets.csv file. I have been experimenting with converting the existing .tsv, .npy, and .mat files to .csv to see if that file can be produced from them, but I haven't had much luck.

cbirchsy commented 10 months ago

Hi @lhendre, apologies for the delay. The coords_targets.csv files are just the debug_{train|valid|test}.tsv files with some small modifications (remove the sequence column, add a prefix to the target column names, remove duplicates, save as CSV), which can be done with this snippet:

import pandas as pd

def create_coord_target_files(file, name):
    # target column names come from the metadata file in the build-deepsea-training-dataset repo
    target_cols = pd.read_csv('data/deepsea_metadata.tsv', sep='\t')['File accession'].tolist()
    colnames = target_cols + ['Chr_No', 'Start', 'End']
    # read only the coordinate and target columns, which drops the sequence column
    # (if the debug files are tab-separated, sep='\t' may be needed here)
    df = pd.read_csv(file, usecols=colnames, header=0)
    df.drop_duplicates(inplace=True)
    df.reset_index(drop=True, inplace=True)
    # prefix the target columns with 'y_'
    df.rename(columns={k: f'y_{k}' for k in target_cols}, inplace=True)
    df.to_csv(f'{name}_coords_targets.csv')

create_coord_target_files('debug_valid.tsv', 'val')
create_coord_target_files('debug_test.tsv', 'test')
create_coord_target_files('debug_train.tsv', 'train')

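As a quick sanity check (my own addition, not part of the original snippet; it only assumes the column layout described above), you can read one of the generated files back and confirm it contains the coordinate columns plus the y_-prefixed target columns:

import pandas as pd

# Sanity check sketch: the generated CSV should contain Chr_No/Start/End
# plus the y_-prefixed target columns produced by the snippet above.
df = pd.read_csv('train_coords_targets.csv', index_col=0)
target_cols = [c for c in df.columns if c.startswith('y_')]
print(df.shape, 'target columns:', len(target_cols))
assert all(c in df.columns for c in ['Chr_No', 'Start', 'End'])
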
lhendre commented 10 months ago

Hi @cbirchsy, that appears to answer my questions! I'm testing it now, but this fills in the missing gaps. If this works, would it make sense for me to open a PR to update the instructions, and potentially add a small script for create_coord_target_files?

lhendre commented 9 months ago

Hi, this appears to get the preprocessing working; do let me know if it would make sense for me to add this in via a PR. We are currently trying to replicate the paper's results on the chromatin task and are still running into some issues, so if there are any additional steps let me know, but we are still digging in on our side.

jimmylihui commented 8 months ago

Hi, this appears to get the preprocessing working; do let me know if it would make sense for me to add this in via a PR. We are currently trying to replicate the paper's results on the chromatin task and are still running into some issues, so if there are any additional steps let me know, but we are still digging in on our side.

I checked your code: isn't the output file out/train.tsv instead of train.mat, since your coordinate file is read as a .csv?

lhendre commented 6 months ago

@jimmylihui are you referring to step 4?
Sorry for the delay, I ended up switching jobs. I'll go back and double-check this, but this step uses a third-party library to build the dataset, and the instructions are here. The specific thing to notice is that the save_debug_info flag is set to true.

If you dig into the build file, you'll see it then creates the .tsv files that are used by the coordinate-reading script.

I will go back and double-check this, but I can also leave a note to ensure there is no confusion, if that is helpful.
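
For anyone following along, here is a small check I added (the out/ directory and exact filenames are assumptions based on this thread, not something from the repo) to confirm the debug TSVs were actually written before running the conversion snippet above:

import os

# Sketch: verify the build step (run with save_debug_info set to true) produced the
# debug files referenced earlier in this thread; the paths/names are assumptions.
for split in ("train", "valid", "test"):
    path = os.path.join("out", f"debug_{split}.tsv")
    print(path, "found" if os.path.exists(path) else "MISSING")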

rubensolozabal commented 1 month ago

Hello, I can confirm the script from @cbirchsy above works.

However, I tried to reproduce the results with the 7M model trained for 50 epochs and could only get an AUROC of 0.8.

d_model: 256, n_layer: 8, l_max: 1024

P.S. I am training from scratch as commented on GitHub (I could not find an 8-layer model pretrained on 1k). Thank you.


Solved my issue; now I can correctly reproduce the results on the chromatin profile prediction task:

At inference, one needs to comment out the following block in chromatin_profile.yaml:

# for loading backbone and not head, requires both of these flags below
# pretrained_model_state_hook:
#   name: load_backbone
#   freeze_backbone: false  # seems to work much better if false (ie finetune entire model)