Thanks for developing this great tool! I just read your paper and would like to try it myself. However, the first step, using 'get_mats.py' to process the raw contacts, confused me.
In the datasets folder you provide two sample .contacts files, which I see are used as input to the 'get_hic_mat()' function to produce the 'data_1000000.pkl' file. But the following step calls the 'normalize()' function and takes some '.meta' files as input to produce the 'data_norm.pkl' file. You did not provide a sample .meta file or describe its structure in the README (or is it the same format as .contacts?), so I am not sure how to proceed.
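For context, this is roughly how I have been inspecting the intermediate file. Since I don't know the real structure of 'data_1000000.pkl', the dictionary below is only a hypothetical stand-in (a chromosome-to-matrix mapping is my guess), written to an in-memory buffer so the snippet runs without the actual file:

```python
import io
import pickle

# Hypothetical stand-in for data_1000000.pkl -- I don't know the real
# structure, so a chromosome -> contact-matrix dict is only a guess.
dummy = {"chr1": [[0, 1], [1, 0]]}

# Round-trip through an in-memory buffer instead of a file on disk.
buf = io.BytesIO()
pickle.dump(dummy, buf)
buf.seek(0)
loaded = pickle.load(buf)

# With the real output of get_hic_mat() this would be:
#   with open("data_1000000.pkl", "rb") as f:
#       loaded = pickle.load(f)
print(sorted(loaded.keys()))
```

If you could confirm what keys/values 'normalize()' expects from the .meta files, I could check my data against that.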
I have the same question. Also, when the images are resized, how is the final imputation result reconstructed from the imputed resized images? Thank you.
Hi,
Could you help me with this issue?
Thanks