It would be nice to have something similar to Hadoop's counters, to keep track
of how many times various things happen in the slaves. It seems most natural to
tie the counters to the job object that gets passed into run(self, job), but the
map and reduce functions also need access to them, so maybe they should live on
mrs.MapReduce instead, where the map and reduce functions could reach them at
self.counters, or something like that (see the rough sketch below).
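A minimal sketch of what such a counter object could look like, assuming it ends
up exposed as self.counters on mrs.MapReduce. The class name Counters, the
incr/merge/as_dict methods, and the self.counters attribute are all assumptions
for illustration; none of this exists in the current Mrs API.

    import collections

    class Counters(object):
        """Hypothetical per-slave counter store (not part of the current Mrs API).

        Each slave would increment counters locally during map/reduce, and the
        master would aggregate them afterwards, similar to Hadoop's counters.
        """

        def __init__(self):
            self._counts = collections.Counter()

        def incr(self, name, amount=1):
            """Increment the named counter by the given amount."""
            self._counts[name] += amount

        def merge(self, other):
            """Fold in counts reported by another slave."""
            self._counts.update(other._counts)

        def as_dict(self):
            """Return a plain dict snapshot, e.g. for logging on the master."""
            return dict(self._counts)

    # How a program might use it if counters lived on mrs.MapReduce
    # (the self.counters attribute is the proposal in this issue, not an
    # existing interface):
    #
    # class WordCount(mrs.MapReduce):
    #     def map(self, key, value):
    #         for word in value.split():
    #             if not word.isalpha():
    #                 self.counters.incr('non_alpha_words')
    #             yield word, 1

    if __name__ == '__main__':
        # Stand-in for two slaves reporting their counts back to the master.
        slave_a, slave_b = Counters(), Counters()
        slave_a.incr('bad_records', 3)
        slave_b.incr('bad_records')
        slave_a.merge(slave_b)
        print(slave_a.as_dict())  # {'bad_records': 4}

Aggregating on the master (the merge step above) would also sidestep the question
of whether counters belong on the job object, since the slaves only ever touch
their own local copy.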
Original issue reported on code.google.com by sabrina....@gmail.com on 31 Oct 2012 at 4:58