Open czha168 opened 9 years ago
Hi Charlie,
Thanks for the feedback.
The error message is indeed a Hadoop compatibility issue: it appears you have compiled Cubert for Hadoop 1.x.
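For context: org.apache.hadoop.mapreduce.JobContext was a concrete class in Hadoop 1.x but became an interface in Hadoop 2.x, so bytecode compiled against one cannot link against the other, and the JVM reports exactly the IncompatibleClassChangeError you saw. A minimal sketch of the kind of call site involved (a hypothetical class, simplified from what CubertInputFormat.getSplits() does):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.JobContext;

public class JobContextDemo {
    // Against Hadoop 1.x, where JobContext is a class, this call compiles
    // to an invokevirtual instruction; against Hadoop 2.x, where JobContext
    // is an interface, it must be invokeinterface. A jar built against one
    // version therefore fails to link against the other, throwing
    // java.lang.IncompatibleClassChangeError at the first such call.
    public static Configuration confOf(JobContext context) {
        return context.getConfiguration();
    }
}
```

Rebuilding against the Hadoop 2 artifacts regenerates the correct bytecode, which is what the gradle.properties change below achieves.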
If you check the gradle.properties file, it will probably look like this:
org.gradle.daemon=true
group=com.linkedin.cubert
version=0.1.0
gradleVersion=2.1
avroVersion=1.7.4
antlrVersion=4.3
jodaVersion=2.4
hadoopVersion=1.2.1
pigVersion=0.13.0
avroMapredVersion=1.7.4
org.gradle.jvmargs="-XX:MaxPermSize=512m"
systemProp.file.encoding=utf-8
To compile for Hadoop 2, comment out the three lines below the "Properties for Hadoop 1" header and uncomment the three lines below "Properties for Hadoop 2".
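For reference, here is a sketch of what the edited section might look like, assuming the stock file carries those two commented headers (the Hadoop 2 version values below are illustrative only; keep whatever values ship in your copy of gradle.properties):

```properties
# Properties for Hadoop 1
# hadoopVersion=1.2.1
# pigVersion=0.13.0
# avroMapredVersion=1.7.4

# Properties for Hadoop 2
hadoopVersion=2.5.1
pigVersion=0.13.0
avroMapredVersion=1.7.4
```

After editing, do a clean rebuild (for example `gradle clean build`, or `./gradlew clean build` if the repository ships a wrapper) so the jars are recompiled against the Hadoop 2 APIs.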
Hope that works for you.
Thanks -Maneesh
PS: I am also looking into your first email (Cubert-* vs cubert-*).
On Wed, Nov 12, 2014 at 11:55 AM, czha168 notifications@github.com wrote:
Hi, gang,
When I tried to run the tutorial, I got the following error message:
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
	at com.linkedin.cubert.io.CubertInputFormat.getSplits(CubertInputFormat.java:74)
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
	at com.linkedin.cubert.plan.physical.JobExecutor.run(JobExecutor.java:148)
	at com.linkedin.cubert.plan.physical.ExecutorService.executeJob(ExecutorService.java:229)
	at com.linkedin.cubert.plan.physical.ExecutorService.executeJobId(ExecutorService.java:196)
	at com.linkedin.cubert.plan.physical.ExecutorService.execute(ExecutorService.java:140)
	at com.linkedin.cubert.ScriptExecutor.execute(ScriptExecutor.java:301)
	at com.linkedin.cubert.ScriptExecutor.main(ScriptExecutor.java:517)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Could this be a Hadoop compatibility issue? I am using Apache Hadoop 2.5.1.
Best, Charlie
Hi, Maneesh,
That solved the problem. The first tutorial worked! I will check the second one.
Thank you, guys.
Charlie
Hi, Maneesh,
Both tutorials worked just great.
Thank you for your help.
Charlie