awsdocs / amazon-emr-release-guide

The open source version of the Amazon EMR Release Guide. You can submit feedback & requests for changes by submitting issues in this repo or by making proposed changes & submitting a pull request.

Cannot launch EMR cluster with release label 5.11.0 #13

Closed shresthaankit7 closed 3 years ago

shresthaankit7 commented 5 years ago

Hello, I've been using EMR release 5.0.0 for the past two years, and I'm now trying to upgrade to release label 5.11.0 per my company's standards.

Currently I am using the AWS SDK modules at version 1.11.39. With that version I can launch clusters with both release labels (5.0.0 and 5.11.0), but the launch fails if I upgrade the AWS SDK to version 1.11.221 or the latest release.

The exception message is:

```
Caused by: com.amazonaws.services.elasticmapreduce.model.AmazonElasticMapReduceException: An Internal server error occurred (Service: AmazonElasticMapReduce; Status Code: 500; Error Code: InternalFailure; Request ID: 6308586c-2840-1e64bd)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
    at com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient.doInvoke(AmazonElasticMapReduceClient.java:1898)
    at com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient.invoke(AmazonElasticMapReduceClient.java:1874)
    at com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient.executeRunJobFlow(AmazonElasticMapReduceClient.java:1655)
    at com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient.runJobFlow(AmazonElasticMapReduceClient.java:1631)
```

Please help me understand what I am doing wrong.
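For readers hitting the same error: the failing Java call above is `runJobFlow`; the equivalent request in the Python SDK (boto3) uses `run_job_flow` with the same parameter shapes. The sketch below only assembles the request dict locally and does not call AWS; the cluster name, instance types, and counts are placeholders, not values from this issue.

```python
# Minimal sketch of the RunJobFlow request as a boto3 parameter dict.
# Placeholder values are marked; with credentials configured, the actual
# call would be boto3.client("emr").run_job_flow(**params).

def build_run_job_flow_params(release_label, configurations):
    """Assemble the request dict accepted by boto3's EMR run_job_flow."""
    return {
        "Name": "example-cluster",            # placeholder name
        "ReleaseLabel": release_label,        # e.g. "emr-5.11.0"
        "Instances": {
            "MasterInstanceType": "m4.large", # placeholder instance type
            "SlaveInstanceType": "m4.large",  # placeholder instance type
            "InstanceCount": 3,               # placeholder count
        },
        # Same classification list as discussed in this issue:
        "Configurations": configurations,
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

params = build_run_job_flow_params("emr-5.11.0", [
    {
        "Classification": "core-site",
        "Properties": {
            "io.compression.codecs": "com.hadoop.compression.lzo.LzoCodec",
        },
        "Configurations": [],
    },
])
print(params["ReleaseLabel"])  # → emr-5.11.0
```

Building the request as plain data first makes it easy to diff the payload sent by SDK 1.11.39 against the one sent by 1.11.221 when chasing a server-side 500.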

shresthaankit7 commented 5 years ago

I was able to launch a cluster for release label 5.11.0 with AWS SDK 1.11.221, but only without providing the Hadoop configurations. Whenever I include the Hadoop configuration, I get the same 500 Internal Server Error.

Configuration looks like this:

```json
[
  {
    "classification": "core-site",
    "properties": {
      "fs.s3a.access.key": "********",
      "fs.s3.awsAccessKeyId": "********",
      "fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem",
      "hadoop.proxyuser.mapred.hosts": "*",
      "hadoop.proxyuser.mapred.groups": "*",
      "io.compression.codec.lzo.class": "com.hadoop.compression.lzo.LzoCodec",
      "fs.s3.awsSecretAccessKey": "********",
      "io.compression.codecs": "com.hadoop.compression.lzo.LzoCodec",
      "fs.s3a.buffer.dir": "${hadoop.tmp.dir}/s3a",
      "fs.s3a.secret.key": "********"
    },
    "configurations": []
  },
  {
    "classification": "mapred-site",
    "properties": {
      "mapreduce.reduce.shuffle.parallelcopies": "20",
      "mapreduce.task.io.sort.mb": "512",
      "mapreduce.tasktracker.reduce.tasks.maximum": "10",
      "mapreduce.map.speculative": "false",
      "mapreduce.output.fileoutputformat.compress": "true",
      "mapreduce.output.fileoutputformat.compress.codec": "com.hadoop.compression.lzo.LzoCodec",
      "mapred.child.java.opts": "-Xmx3500m",
      "mapreduce.job.reduce.slowstart.completedmaps": "0.99",
      "mapreduce.tasktracker.map.tasks.maximum": "13",
      "mapreduce.task.io.sort.factor": "48",
      "mapreduce.reduce.java.opts": "-Xmx4500m",
      "mapreduce.map.memory.mb": "4096",
      "mapreduce.map.output.compress.codec": "com.hadoop.compression.lzo.LzoCodec",
      "mapreduce.job.reduces": "40",
      "yarn.app.mapreduce.am.command-opts": "-Xmx2000m",
      "mapreduce.reduce.memory.mb": "5120",
      "mapreduce.map.java.opts": "-Xmx3800m",
      "mapreduce.reduce.speculative": "false",
      "yarn.app.mapreduce.am.resource.mb": "2048"
    },
    "configurations": []
  },
  {
    "classification": "yarn-site",
    "properties": {
      "yarn.nodemanager.aux-services": "mapreduce_shuffle,spark_shuffle",
      "yarn.nodemanager.resource.cpu-vcores": "36",
      "yarn.nodemanager.resource.memory-mb": "57344",
      "yarn.application.classpath": "$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*,/data/cascading/lib/*,/usr/lib/hadoop-lzo/lib/*,/usr/share/aws/emr/emrfs/conf,/usr/share/aws/emr/emrfs/lib/*,/usr/share/aws/emr/emrfs/auxlib/*,/usr/share/aws/emr/lib/*,/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar,/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar,/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar,/usr/share/aws/emr/cloudwatch-sink/lib/*",
      "yarn.scheduler.maximum-allocation-vcores": "36",
      "yarn.scheduler.maximum-allocation-mb": "57344",
      "yarn.scheduler.minimum-allocation-mb": "512",
      "yarn.nodemanager.aux-services.spark_shuffle.class": "org.apache.spark.network.yarn.YarnShuffleService"
    },
    "configurations": []
  },
  {
    "classification": "hdfs-site",
    "properties": {
      "dfs.blocksize": "134217728"
    },
    "configurations": []
  },
  {
    "classification": "capacity-scheduler",
    "properties": {
      "yarn.scheduler.capacity.root.acl_submit_applications": "hadoop,yarn,mapred,hdfs",
      "yarn.scheduler.capacity.root.queues": "default",
      "yarn.scheduler.capacity.root.default.acl_submit_applications": "hadoop,yarn,mapred,hdfs",
      "yarn.scheduler.capacity.root.default.capacity": "100",
      "yarn.scheduler.capacity.root.default.state": "RUNNING"
    },
    "configurations": []
  },
  {
    "classification": "hadoop-env",
    "properties": {},
    "configurations": [
      {
        "classification": "export",
        "properties": {
          "HADOOP_CLASSPATH": "\"${HADOOP_CLASSPATH}:/home/hadoop/.driven-plugin/:/data/cascading/lib/*\""
        },
        "configurations": []
      }
    ]
  },
  {
    "classification": "yarn-env",
    "properties": {},
    "configurations": [
      {
        "classification": "export",
        "properties": {
          "YARN_USER_CLASSPATH": "\"${YARN_USER_CLASSPATH}:/home/hadoop/.driven-plugin/\""
        },
        "configurations": []
      }
    ]
  },
  {
    "classification": "spark-defaults",
    "properties": {
      "spark.executor.memory": "8G",
      "spark.driver.memory": "10G",
      "spark.executor.cores": "5",
      "spark.executor.instances": "21"
    },
    "configurations": []
  }
]
```

This same configuration launches 5.0.0 clusters without issue; it just fails for 5.11.0. Any help would be appreciated.
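Since the 500 only appears when the configuration list is included, a local structural check can help bisect which classification the service rejects. The sketch below is an assumption-laden helper (not part of any EMR SDK): it walks the list and flags entries with a missing classification or a non-string property value, since EMR expects every property value to be a string. The function name and the sample data are illustrative only.

```python
# Hypothetical local sanity check for an EMR configuration list.
# Walks the classification tree and flags entries that look malformed,
# accepting either lowercase or capitalized key spellings.

def find_config_problems(configs, path="root"):
    """Return a list of (path, message) pairs for suspicious entries."""
    problems = []
    for i, c in enumerate(configs):
        where = f"{path}[{i}]"
        cls = c.get("classification") or c.get("Classification")
        if not isinstance(cls, str) or not cls:
            problems.append((where, "missing classification"))
        props = c.get("properties", c.get("Properties", {}))
        for key, value in props.items():
            if not isinstance(value, str):
                problems.append((f"{where}.{key}",
                                 "property value is not a string"))
        # Nested configurations (e.g. hadoop-env -> export) use the same shape.
        nested = c.get("configurations", c.get("Configurations", []))
        problems.extend(find_config_problems(nested, where))
    return problems

sample = [
    {
        "classification": "hdfs-site",
        "properties": {"dfs.blocksize": 134217728},  # int, not str: flagged
        "configurations": [],
    },
]
print(find_config_problems(sample))
```

If the list passes this check, the next step is bisection: resubmit with half the classifications at a time until the offending one is isolated, then compare it against the application versions shipped in the target release label.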

ellkend-aws commented 3 years ago

@shresthaankit7 Thank you for opening this issue and for your patience. We've been working to better maintain the GitHub version of the EMR Release Guide. It sounds like you have a good question for the support team. I suggest that you post technical support questions on the Amazon EMR developer forum, which is a good resource: https://forums.aws.amazon.com/forum.jspa?forumID=52