spring-attic / spring-hadoop-samples

Spring Hadoop Samples

Having issues with your mapreduce example #4

Open brendangreene opened 10 years ago

brendangreene commented 10 years ago

I've installed Hadoop 2.2.0 and I'm able to run the Hadoop samples just fine from the command line. I'm interested in using Spring, though, so I tried this sample. Whenever I run the job I get this output:

sh ./target/appassembler/bin/wordcount
09:24:53,391 INFO t.support.ClassPathXmlApplicationContext: 510 - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@370410a7: startup date [Sat Jan 18 09:24:53 MST 2014]; root of context hierarchy
09:24:53,540 INFO eans.factory.xml.XmlBeanDefinitionReader: 315 - Loading XML bean definitions from class path resource [META-INF/spring/application-context.xml]
09:24:53,914 INFO ort.PropertySourcesPlaceholderConfigurer: 172 - Loading properties file from class path resource [hadoop.properties]
09:24:53,938 INFO ctory.support.DefaultListableBeanFactory: 596 - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@4fe596de: defining beans [org.springframework.context.support.PropertySourcesPlaceholderConfigurer#0,hadoopConfiguration,wordcountJob,setupScript,runner]; root of factory hierarchy
09:24:54,096 INFO he.hadoop.conf.Configuration.deprecation: 840 - fs.default.name is deprecated. Instead, use fs.defaultFS
2014-01-18 09:24:54.573 java[4237:1703] Unable to load realm info from SCDynamicStore
09:25:28,945 WARN org.apache.hadoop.util.NativeCodeLoader: 62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
09:25:29,488 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
09:25:29,501 INFO org.apache.hadoop.fs.TrashPolicyDefault: 92 - Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
09:25:30,707 INFO org.apache.hadoop.yarn.client.RMProxy: 56 - Connecting to ResourceManager at localhost/127.0.0.1:8032
09:25:30,759 INFO ramework.data.hadoop.mapreduce.JobRunner: 192 - Starting job [wordcountJob]
09:25:30,790 INFO org.apache.hadoop.yarn.client.RMProxy: 56 - Connecting to ResourceManager at localhost/127.0.0.1:8032
09:25:31,055 WARN org.apache.hadoop.mapreduce.JobSubmitter: 258 - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
09:25:31,111 INFO doop.mapreduce.lib.input.FileInputFormat: 287 - Total input paths to process : 1
09:25:31,260 INFO org.apache.hadoop.mapreduce.JobSubmitter: 394 - number of splits:1
09:25:31,271 INFO he.hadoop.conf.Configuration.deprecation: 840 - user.name is deprecated. Instead, use mapreduce.job.user.name
09:25:31,272 INFO he.hadoop.conf.Configuration.deprecation: 840 - fs.default.name is deprecated. Instead, use fs.defaultFS
09:25:31,275 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapred.mapoutput.value.class is deprecated. Instead, use mapreduce.map.output.value.class
09:25:31,276 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
09:25:31,276 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapred.job.name is deprecated. Instead, use mapreduce.job.name
09:25:31,276 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
09:25:31,277 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
09:25:31,277 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
09:25:31,277 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
09:25:31,278 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
09:25:31,278 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapred.mapoutput.key.class is deprecated. Instead, use mapreduce.map.output.key.class
09:25:31,279 INFO he.hadoop.conf.Configuration.deprecation: 840 - mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
09:25:31,412 INFO org.apache.hadoop.mapreduce.JobSubmitter: 477 - Submitting tokens for job: job_1390012296433_0009
09:25:31,641 INFO org.apache.hadoop.mapred.YARNRunner: 368 - Job jar is not present. Not adding any jar to the list of resources.
09:25:31,705 INFO doop.yarn.client.api.impl.YarnClientImpl: 174 - Submitted application application_1390012296433_0009 to ResourceManager at localhost/127.0.0.1:8032
09:25:31,748 INFO org.apache.hadoop.mapreduce.Job:1272 - The url to track the job: http://Admins-MacBook-Pro.local:8088/proxy/application_1390012296433_0009/
09:25:31,749 INFO org.apache.hadoop.mapreduce.Job:1317 - Running job: job_1390012296433_0009
09:25:35,778 INFO org.apache.hadoop.mapreduce.Job:1338 - Job job_1390012296433_0009 running in uber mode : false
09:25:35,780 INFO org.apache.hadoop.mapreduce.Job:1345 - map 0% reduce 0%
09:25:35,796 INFO org.apache.hadoop.mapreduce.Job:1358 - Job job_1390012296433_0009 failed with state FAILED due to: Application application_1390012296433_0009 failed 2 times due to AM Container for appattempt_1390012296433_0009_000002 exited with exitCode: 127 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
.Failing this attempt.. Failing the application.
09:25:35,850 INFO org.apache.hadoop.mapreduce.Job:1363 - Counters: 0
09:25:35,858 INFO ramework.data.hadoop.mapreduce.JobRunner: 202 - Completed job [wordcountJob]
09:25:35,876 INFO org.apache.hadoop.yarn.client.RMProxy: 56 - Connecting to ResourceManager at localhost/127.0.0.1:8032
09:25:35,914 INFO ctory.support.DefaultListableBeanFactory: 444 - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@4fe596de: defining beans [org.springframework.context.support.PropertySourcesPlaceholderConfigurer#0,hadoopConfiguration,wordcountJob,setupScript,runner]; root of factory hierarchy
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'runner': Invocation of init method failed; nested exception is java.lang.IllegalStateException: Job [wordcountJob] failed to start; status=FAILED
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1488)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:524)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:461)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:626)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:932)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:479)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:197)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:172)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:158)
    at org.springframework.samples.hadoop.mapreduce.Wordcount.main(Wordcount.java:28)
Caused by: java.lang.IllegalStateException: Job [wordcountJob] failed to start; status=FAILED
    at org.springframework.data.hadoop.mapreduce.JobExecutor$2.run(JobExecutor.java:223)
    at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:49)
    at org.springframework.data.hadoop.mapreduce.JobExecutor.startJobs(JobExecutor.java:172)
    at org.springframework.data.hadoop.mapreduce.JobExecutor.startJobs(JobExecutor.java:164)
    at org.springframework.data.hadoop.mapreduce.JobRunner.call(JobRunner.java:52)
    at org.springframework.data.hadoop.mapreduce.JobRunner.afterPropertiesSet(JobRunner.java:44)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1547)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1485)
    ... 13 more

Any thoughts on this? It looks like the job is connecting to Hadoop and the ResourceManager...

trisberg commented 10 years ago

Hi,

I'm assuming you built using the '-P hadoop22' profile.
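
If you want to double-check, the build for a Hadoop 2.2 cluster should look roughly like this (a sketch; check the sample's README for the exact goals, since the scripts under target/appassembler/bin are generated by the build):

mvn clean package appassembler:assemble -P hadoop22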

It looks like the application is starting but then failing - anything in the resourcemanager or nodemanager logs indicating a failure to run the job?
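
One way to dig further, assuming log aggregation is enabled on your cluster, is to pull the container logs for the failed attempt (substitute the application id from your client output):

yarn logs -applicationId application_1390012296433_0009

If aggregation is off, the container's stdout/stderr files under the nodemanager's local log directory should show why the AM exited with code 127.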

brendangreene commented 10 years ago

I'm seeing this in my resourcemanager logs:

2014-01-21 20:34:26,801 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1390064045818_0006 failed 2 times due to AM Container for appattempt_1390064045818_0006_000002 exited with exitCode: 127 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
.Failing this attempt.. Failing the application.
2014-01-21 20:34:26,801 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1390064045818_0006_02_000001, NodeId: 192.168.1.135:56710, NodeHttpAddress: 192.168.1.135:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.1.135:56710 }, ] resource=<memory:2048, vCores:1> queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2014-01-21 20:34:26,801 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing info for app: application_1390064045818_0006
2014-01-21 20:34:26,801 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1390064045818_0006 State change from ACCEPTED to FAILED
2014-01-21 20:34:26,802 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2014-01-21 20:34:26,802 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bgreene OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1390064045818_0006 failed 2 times due to AM Container for appattempt_1390064045818_0006_000002 exited with exitCode: 127 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
.Failing this attempt.. Failing the application. APPID=application_1390064045818_0006

trisberg commented 10 years ago

So, it looks like you are missing some permissions on the file system. You could try to change the following lines in hadoop.properties

wordcount.input.path=/user/gutenberg/input/word/
wordcount.output.path=/user/gutenberg/output/word/

Change those to directories that your user has write permissions for.
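
As a concrete sketch (the bgreene user name is taken from the audit log above; adjust the paths and user to your setup, and run the chown as the HDFS superuser if needed):

hadoop fs -mkdir -p /user/bgreene/gutenberg/input/word
hadoop fs -chown -R bgreene /user/bgreene

Then in hadoop.properties:

wordcount.input.path=/user/bgreene/gutenberg/input/word/
wordcount.output.path=/user/bgreene/gutenberg/output/word/

The output directory itself should not be created up front; the job creates it.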

pooleja commented 10 years ago

Did you try looking at the attempt logs at the following location? http://Admins-MacBook-Pro.local:8088/proxy/application_1390012296433_0009/

I just tried running the MapReduce job against a generic Hadoop cluster and got the same error. When I looked at the attempt logs, I saw this:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/v2/app/MRAppMaster
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapreduce.v2.app.MRAppMaster

According to Stack Overflow, it looks like there might be something wrong with the YARN classpath:

http://stackoverflow.com/questions/20699632/hadoop-2-2-word-count-example-failing-on-windows-7
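
For what it's worth, the fix suggested in that thread is to define yarn.application.classpath explicitly in yarn-site.xml so the AM container can see the MapReduce jars. A sketch for an Apache Hadoop 2.2 tarball install (the HADOOP_* variables must resolve on the nodemanager hosts; adjust for a distro layout):

<property>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/share/hadoop/common/*,
    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*,
    $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,
    $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
  </value>
</property>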

v5tech commented 10 years ago

It may be that you have set yarn.application.classpath in yarn-site.xml and also in /etc/profile, so it is defined twice. You should delete one of the two definitions, restart the Hadoop cluster, and then run the job again.
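
A quick way to confirm the duplicate definition (the paths are guesses; point them at your actual config files):

grep -n "yarn.application.classpath" $HADOOP_CONF_DIR/yarn-site.xml /etc/profile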

akshatthakar commented 9 years ago

I faced a similar issue. I traced the root cause by checking the logs on the History Server through its web URL.

The exception occurred while launching the container for the MR job. It was due to a classpath issue: I had some jars (Avro jars) that were clashing with the YARN framework jars. I removed those jars from the job jar and it worked.
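
If anyone hits the same thing, listing the job jar's contents is a quick way to spot bundled classes that clash with the framework (the jar path here is a placeholder):

jar tf target/your-job.jar | grep -i avro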

vvelayutham-mwb commented 9 years ago

I am still facing the issue. We have CDH 5.1.2, and today we deployed Kerberos security on top of it. The installation was done almost 8 months ago without Kerberos, and we have had an MR job running for the past 6 months. After the Kerberos setup was done, when I run the same MR job, it fails to load the third-party libraries that we set in the HADOOP_CLASSPATH variable.

[ec2-user@etl ]$ hadoop classpath
/etc/hadoop/conf:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*
[ec2-user@etl ]$ export HADOOP_CLASSPATH=`hadoop classpath`:/home/ec2-user/framework/framework.jar:/home/ec2-user/framework/lib/java-json.jar:/home/ec2-user/framework/lib/log4j-core-2.0.1.jar:/home/ec2-user/framework/lib/log4j-api-2.0.1.jar:/home/ec2-user/framework/lib/commons-dbcp-1.4.jar:/home/ec2-user/framework/lib/commons-pool-1.6.jar:/home/ec2-user/framework/lib/mysql-connector-java-5.1.31-bin.jar:
[ec2-user@etl ]$ hadoop classpath
/etc/hadoop/conf:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*:/etc/hadoop/conf:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*:/home/ec2-user/framework/framework.jar:/home/ec2-user/framework/lib/java-json.jar:/home/ec2-user/framework/lib/log4j-core-2.0.1.jar:/home/ec2-user/framework/lib/log4j-api-2.0.1.jar:/home/ec2-user/framework/lib/commons-dbcp-1.4.jar:/home/ec2-user/framework/lib/commons-pool-1.6.jar:/home/ec2-user/framework/lib/mysql-connector-java-5.1.31-bin.jar:

The Hadoop environment is set in the PATH, but when I run the hadoop/yarn command it fails, and the application log says:

2015-03-31 00:19:19,625 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoClassDefFoundError: org/json/JSONException
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:190)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.isChainJob(JobImpl.java:1292)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.makeUberDecision(JobImpl.java:1234)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.access$3700(JobImpl.java:139)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1435)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1373)
    at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
    at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
    at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
    at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:986)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:138)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1249)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1049)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1460)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1456)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1389)
Caused by: java.lang.ClassNotFoundException: org.json.JSONException
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 22 more

This MR job had been running fine and only started failing today. The only change was adding the Kerberos setup to the configuration object and setting the UserGroupInformation object. After running the build and updating the jar, we started having failures. Any clues from anyone would be really helpful.