aryarm / varCA

Use an ensemble of variant callers to call variants from ATAC-seq data

excessive memory usage when predicting variants #21


aryarm commented 4 years ago

The problem

@Jaureguy760 discovered that the predict_RF.R script uses an excessive amount of memory (up to ~400 times the size of its input data!). We should investigate why this is happening.

It might just be a quirk of the R packages we're using (i.e., mlr and ranger), so the following are some other solutions we could try if we can't get mlr and ranger to behave themselves.

Solution 1

There should be a way to make predictions on slices of the data at a time, so that we never load the entire dataset into memory at once. Maybe we could predict on just 1000 rows at a time? (See the sketch below.)
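
Something like this might work as a first pass. It's just a rough sketch with plain ranger, assuming model is the fitted forest and data is the full prediction data frame (predict_in_chunks and chunk_size are made-up names); with an mlr WrappedModel, the inner call would be predict(model, newdata = chunk) instead:

library(ranger)

predict_in_chunks <- function(model, data, chunk_size = 1000) {
  starts <- seq(1, nrow(data), by = chunk_size)
  preds <- lapply(starts, function(start) {
    end <- min(start + chunk_size - 1, nrow(data))
    chunk <- data[start:end, , drop = FALSE]
    # only this slice (plus the forest itself) needs to be in memory at once
    predict(model, data = chunk)$predictions
  })
  # stitch the per-chunk results back together
  # (use do.call(rbind, preds) instead if the predictions are probability matrices)
  unlist(preds, use.names = FALSE)
}

The chunk_size = 1000 default is arbitrary; we'd want to tune it against the per-row memory usage we actually observe.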

Solution 2

We could declare the expected memory usage of the predict rule via the resources directive, so that the cluster scheduler knows not to run too many of these jobs at the same time. We should add this to the predict rule:

resources:
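    # request ~500x the input size (we saw up to ~400x), scaled up on each retry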
    mem_mb=lambda wildcards, input, attempt: int(input.size_mb*500*attempt)

And add this parameter to the qsub command in the run.bash script:

-l h_vmem={resources.mem_mb}M

(note the M suffix: SGE interprets a bare h_vmem value as bytes)

aryarm commented 3 years ago

ok, just updating this for posterity: we tried solution 2, but it basically just caused none of the jobs to ever get scheduled (presumably because the memory requests were too large for the queue to grant). So next up might be to figure out some way to do solution 1.

aryarm commented 3 years ago

update: this may be related to https://github.com/imbs-hl/ranger/issues/202