Closed — davidonlaptop closed this issue 9 years ago
Allocate more memory for Spark by using the SPARK_DRIVER_MEMORY and SPARK_EXECUTOR_MEMORY variables
Start an ADAM container:
$ docker run --rm -ti -v /Users/david/data:/data gelog/adam bash
root@42c257dcfbcc:/#
Then, run ADAM with 1.5GB of RAM:
root@42c257dcfbcc:/# SPARK_DRIVER_MEMORY=1500m SPARK_EXECUTOR_MEMORY=1500m adam-submit transform /data/1kg/samples/hg00096/HG00096.chrom20.ILLUMINA.bwa.GBR.low_coverage.20120522.bam /data/adamcloud/hg00096.chrom20.adam
Spark assembly has been built with Hive, including Datanucleus jars on classpath
2015-06-01 20:36:24 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-06-01 20:36:33 WARN ThreadLocalRandom:136 - Failed to generate a seed from SecureRandom within 3 seconds. Not enough entrophy?
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
root@42c257dcfbcc:/#
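As an alternative to exporting the variables inside the container, the same settings can be passed at container start with Docker's -e flag, so every shell in the container inherits them. This is a sketch reusing the image and volume mount from the example above:

```shell
# Start the gelog/adam container with the Spark memory
# variables preset in the environment (-e), so adam-submit
# picks them up without per-command prefixes.
docker run --rm -ti \
    -e SPARK_DRIVER_MEMORY=1500m \
    -e SPARK_EXECUTOR_MEMORY=1500m \
    -v /Users/david/data:/data \
    gelog/adam bash
```

Note that SPARK_DRIVER_MEMORY and SPARK_EXECUTOR_MEMORY correspond to the spark.driver.memory and spark.executor.memory configuration properties; raising only one of them may still leave the other side of the pipeline under-provisioned.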
Problem
If one of the processes in the pipeline (ADAM, the Spark driver or executor, etc.) does not have enough memory, several different errors may occur:
Stack trace
The following error occurs when the command is run from within the ADAM Docker container: