Closed amitjaiswal closed 10 years ago
I'm looking into this. Thanks for reporting the issue.
BTW, do you know that you can increase the number of counters per job by using the mapreduce.job.counters.limit property (the default is 120)?
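For reference, a minimal sketch of raising that limit on a job's Configuration in Scala (the value 500 is only an illustrative choice; on some Hadoop/CDH versions this limit is enforced cluster-side and has to be set in mapred-site.xml instead):

```scala
import org.apache.hadoop.conf.Configuration

// Minimal sketch: raise the counter limit before the job is submitted.
// "mapreduce.job.counters.limit" and its default of 120 come from the
// comment above; 500 is just an example value.
val conf = new Configuration()
conf.setInt("mapreduce.job.counters.limit", 500)
```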
Please wait a bit for the new SNAPSHOT to be published.
I am using scoobi_2.10-0.8.0-cdh3-SNAPSHOT and seeing job failures because of too many counters.
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Counters Exceeded limit: 120
    at org.apache.hadoop.mapred.JobTracker.getJobCounters(JobTracker.java:4034)
    at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1444)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1440)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1438)
This is happening either during or at the end of the job execution. After some debugging, I found that the Scoobi jobs create a unique counter for each map task, e.g.:
scoobi.counter.mapper.values mapper-0 0
Is it possible that when all the counters are aggregated, the total number of counters exceeds the default limit, causing the job to fail?
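To illustrate the pattern described above (a hypothetical sketch, not Scoobi's actual implementation): if each map task increments a counter named after its own task id, the JobTracker ends up aggregating one distinct counter per mapper, which is how a large job can blow past the 120-counter limit.

```scala
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.Mapper

// Hypothetical illustration only: each map task registers a counter named
// after its own task id, so a job with N map tasks ends up with N distinct
// counters in the "scoobi.counter.mapper.values" group once the JobTracker
// aggregates them.
class CountingMapper extends Mapper[LongWritable, Text, Text, LongWritable] {
  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, LongWritable]#Context): Unit = {
    val taskId = context.getTaskAttemptID.getTaskID.getId   // e.g. 0, 1, 2, ...
    context.getCounter("scoobi.counter.mapper.values", s"mapper-$taskId").increment(1)
    // ... normal map logic would go here ...
  }
}
```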
Can somebody please look into this?
Thanks. Amit