Closed: indygreg closed this issue 11 years ago.
Looked into this a while ago. Don't know how to fix it. If you figure out a fix, send a pull req.
I've run into this problem too. Have you solved it?
You have to put a "log4j.properties" file somewhere on the classpath.
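For reference, a minimal log4j.properties that sends everything to the console might look like the sketch below. The appender name and log level here are illustrative choices, not the configuration jydoop or this cluster actually uses:

```properties
# Root logger: INFO level, writing to a single console appender.
log4j.rootLogger=INFO, console

# Console appender with a simple timestamped layout.
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```

With this file on the classpath, log4j's default initialization picks it up automatically and the "No appenders could be found" warning goes away.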
Thanks, I did put one there, but it didn't work. What is in your log4j.properties?
> 2014-04-24 2:42 GMT+08:00 Mark Reid notifications@github.com:
> You have to put a "log4j.properties" file somewhere on the classpath.
I accidentally pushed a script that contained some errors. Hadoop's output spewed a lot of the following:
```
attempt_201304170845_0427_m_000013_2: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201304170845_0427_m_000013_2: log4j:WARN Please initialize the log4j system properly.
13/04/19 21:29:54 INFO mapred.JobClient: Task Id : attempt_201304170845_0427_m_000009_2, Status : FAILED
Traceback (most recent call last):
  File "/data3/hadoop/mapred/mapred/taskTracker/gszorc/jobcache/job_201304170845_0427/jars/job.jar/scripts/healthreportutils.py", line 156, in wrapper
  File "scripts/fhr_session_counts.py", line 15, in map
  File "/data3/hadoop/mapred/mapred/taskTracker/gszorc/jobcache/job_201304170845_0427/jars/job.jar/scripts/healthreportutils.py", line 26, in get
  File "/data3/hadoop/mapred/mapred/taskTracker/gszorc/jobcache/job_201304170845_0427/jars/job.jar/scripts/healthreportutils.py", line 96, in telemetry_enabled
  File "/data3/hadoop/mapred/mapred/taskTracker/gszorc/jobcache/job_201304170845_0427/jars/job.jar/scripts/healthreportutils.py", line 121, in iterdays
  File "/data3/hadoop/mapred/mapred/taskTracker/gszorc/jobcache/job_201304170845_0427/jars/job.jar/scripts/healthreportutils.py", line 26, in get
  File "/data3/hadoop/mapred/mapred/taskTracker/gszorc/jobcache/job_201304170845_0427/jars/job.jar/scripts/healthreportutils.py", line 88, in days
  File "/data4/hadoop/m
```
While the stack trace is my fault, I suspect the log4j warnings are due to jydoop not integrating cleanly with Hadoop's log4j setup.