When analyzing the airsample datasets, I have modified the number of selected reads to 1.5 million (less than 10% of the data), which is then subsampled to 150,000 reads to calculate statistics.
That causes the process to take much longer than 4 hours for each dataset.
With the current set-up, I have now modified the script so that the run_coverage process has an extra label, longtime, that extends the allotted time to 24 hours. If it needs more than that, I need to explore why it is so slow.
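A minimal sketch of how that label could be wired up, assuming a Nextflow pipeline (the process name run_coverage and the label longtime come from the note above; the default 4 h limit and the config layout are illustrative):

```nextflow
// main.nf: tag the slow process with the extra label
process run_coverage {
    label 'longtime'

    // ... input/output/script blocks unchanged ...
}
```

```nextflow
// nextflow.config: map the label to the extended time limit
process {
    time = '4h'              // default walltime for unlabelled processes

    withLabel: 'longtime' {
        time = '24h'         // extended walltime for run_coverage
    }
}
```

With this split, only processes carrying the longtime label get the 24 h limit, while everything else keeps the shorter default.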