Oh and here is the R version and platform information:
R version 3.4.0 (2017-04-21)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: CentOS Linux 7 (Core)
I tried with 100GB of RAM and still get errors on random jobs. The errors are not always the same:
1)
[2019-02-13 16:43:33] Plotting histogram of coverage...
Error in rowSums(as.matrix(finite_cases)) :
'Calloc' could not allocate memory (112432686 of 16 bytes)
Calls: plot_coverage ... remove_missing -> finite.cases -> finite.cases.data.frame -> rowSums
Execution halted
Warning message:
system call failed: Cannot allocate memory
2)
[2019-02-13 16:43:37] Plotting histogram of coverage...
Error in plyr::split_indices(scale_id, n) : std::bad_alloc
Calls: plot_coverage ... <Anonymous> -> f -> scale_apply -> <Anonymous> -> .Call
Execution halted
Warning message:
system call failed: Cannot allocate memory
3)
[2019-02-13 16:49:06] Loading file snp-pileup.csv.gz...
gzip: stdout: No space left on device
Error in fread(conn, select = c("Chromosome", "Position", "File1R", "File1A", :
Expected sep (',') but '^?' ends field 4 when detecting types from point 10: 2,85037300,.,.,29^?
Calls: readSnpMatrix2 -> fread
Execution halted
Warning message:
system call failed: Cannot allocate memory
4)
[2019-02-13 16:49:17] Loading file snp-pileup.csv.gz...
[2019-02-13 16:50:21] Plotting histogram of coverage...
Error: package or namespace load failed for ‘graphics’ in dyn.load(file, DLLpath = DLLpath, ...):
unable to load shared object '/config/binaries/R/3.4.0/lib64/R/library/graphics/libs/graphics.so':
/config/binaries/R/3.4.0/lib64/R/library/graphics/libs/graphics.so: failed to map segment from shared object: Cannot allocate memory
Error: segfault from C stack overflow
Warning message:
package ‘graphics’ in options("defaultPackages") was not found
Execution halted
Warning message:
system call failed: Cannot allocate memory
Do you know what could be happening?
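One quick way to narrow this down would be to log, at the top of each job script, which node the job landed on and how much memory is free there when it starts. A minimal sketch, assuming a bash job script on a Linux node:

# Record the node and its free memory when the job starts,
# to spot several jobs sharing (and exhausting) the same node.
hostname
date
free -h

If several failing jobs report the same hostname and little available memory, the node is oversubscribed.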
This may be a trivial suggestion... 100GB is indeed a lot, but are you sure you don't have more than one job on the same node, each asking for 100GB? If you are submitting jobs to a cluster via a job scheduler, it may be that the scheduler is not configured to prevent "overbooking" memory (I have seen that happen). From the logs you posted, at least two jobs fail at almost the same time, which suggests they are running on the same node:
[2019-02-13 16:43:33] Plotting histogram of coverage...
[2019-02-13 16:43:37] Plotting histogram of coverage...
Again, if you can share one of the problematic files via Dropbox, I'll see if I can reproduce the issue.
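If overbooking is the cause, asking the scheduler to reserve and enforce a hard memory limit per job prevents it. A minimal sketch, assuming SLURM; the directives and values here are illustrative, not taken from the original submission scripts:

#!/bin/bash
#SBATCH --job-name=cnv_facets
#SBATCH --cpus-per-task=1
#SBATCH --mem=100G    # reserve 100 GB on the node for this job alone

cnv_facets.R ...      # the actual cnv_facets command line goes here

With --mem set, SLURM will not place more jobs on a node than its memory can accommodate, and (depending on the cluster configuration) it can kill a job that exceeds its request instead of letting the whole node run out of memory.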
Hi Dario,
That was exactly it! Thanks for your suggestion :)
Best regards.
Hey Dario,
I have come across some memory issues with my cnv_facets jobs. They occur at different parts of the analysis, for example:
1) …
2) …
3) …
I have run snp-pileup separately, as follows; its output is what feeds into cnv_facets:
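The exact command isn't reproduced above; for reference, a typical invocation in the style suggested by the FACETS documentation looks roughly like the following, where the file names are placeholders and the quality/coverage thresholds are the documented suggestions rather than necessarily the ones used here:

# -g: gzip the output; -q15/-Q20: min mapping/base quality;
# -P100: insert pseudo-SNPs every 100 bp; -r25,0: require >=25 reads in the normal.
# Arguments: VCF of known SNP positions, output file, normal BAM, tumor BAM.
snp-pileup -g -q15 -Q20 -P100 -r25,0 common_snps.vcf.gz snp-pileup.csv.gz normal.bam tumor.bam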
I am running a few hundred samples and about 50% of them throw some sort of error like the above. All the jobs run with 1 thread and 32GB of RAM, which I thought should be more than enough as per your recommendation.
All the tumors are sequenced at ~80x and the normals at ~40x. I will certainly try bumping the memory to an extreme of 64GB, but I wanted to get your input as well. It is strange that some jobs fail and some don't, given that the samples have similar coverage.
Thanks in advance for your input!