dgadiraju / code


Performance improvements issue using CombineTextInputFormat with many small files #2

Closed. 15astro closed this issue 8 years ago.

15astro commented 8 years ago

Hi dgadiraju, I have tried your code to test the performance improvements with small files on a local machine.

Scenario: I have 100 small files, ~30MB each.

Case 1: When the map-reduce job was run over these 100 files using RowCount.java, 100 mappers completed the job in ~10 minutes.

Case 2: When the job was run using RowCountCombinedFileInputFormat.java, only 25 mappers were created, and the job took ~33 minutes to complete. I did not see the expected performance improvement from CombineTextInputFormat. What could be causing this?
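The mapper counts above are consistent with how CombineTextInputFormat packs whole small files into larger splits up to a maximum split size. A minimal sketch in plain Java (no Hadoop dependency), assuming a hypothetical 128MB cap on the combined split size, reproduces the arithmetic: four 30MB files fit per split, so 100 files yield 25 splits.

```java
// Sketch only: greedy file packing in the style of CombineTextInputFormat.
// The 128MB maximum split size is an assumption, not taken from the repo.
public class SplitEstimate {
    static int estimateSplits(int fileCount, long fileSizeMB, long maxSplitMB) {
        int splits = 0;
        long current = 0; // MB accumulated in the split being built
        for (int i = 0; i < fileCount; i++) {
            if (current + fileSizeMB > maxSplitMB) {
                splits++;        // close the current split
                current = 0;     // start a new one
            }
            current += fileSizeMB; // whole files are packed, never cut
        }
        if (current > 0) splits++; // flush the last partially filled split
        return splits;
    }

    public static void main(String[] args) {
        // 100 files x 30MB with a 128MB cap -> 25 splits, i.e. 25 mappers.
        System.out.println(estimateSplits(100, 30, 128)); // prints 25
    }
}
```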

dgadiraju commented 8 years ago

What is the size of the cluster?

15astro commented 8 years ago

This is a local env, in pseudo-distributed mode: 8GB memory, 4 CPUs.

dgadiraju commented 8 years ago

It is very tough to see the expected behavior while running in local or pseudo-distributed mode. You need to try it on a true cluster. Reducing the number of mappers (here, from 100 to 25) need not reduce execution time, but it uses fewer resources on a large cluster, which means more jobs can run simultaneously.
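For reference, a hypothetical driver sketch (not taken from this repo) showing how CombineTextInputFormat is typically wired into a job, with the combined split size capped. The class name, paths, and the 128MB cap are assumptions; the Hadoop API calls (`setInputFormatClass`, `setMaxInputSplitSize`) are standard. This needs the Hadoop client libraries and a cluster or pseudo-distributed setup to actually run.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CombineDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "row count, combined splits");
        job.setJarByClass(CombineDriver.class);
        // Group many small files into fewer, larger input splits.
        job.setInputFormatClass(CombineTextInputFormat.class);
        // Cap each combined split at 128MB (value is an assumption; tune per cluster).
        CombineTextInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);
        // Mapper/reducer class setters omitted for brevity.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```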

15astro commented 8 years ago

I was just wondering why I don't see a performance improvement, referring to this post that uses CombineFileInputFormat.

All the other performance techniques discussed in the series have shown noticeable improvement even on a local machine.

dgadiraju commented 8 years ago

Can you paste the counters information for both runs?

15astro commented 8 years ago

Hi dgadiraju, Below are the details of both runs:

small_files_git_before_log.txt

small_files_git_after_log.txt

Thanks.

dgadiraju commented 8 years ago

The "before" one is a map-only job; there are no reducers involved in it. These runs are not comparable.

The lines below appear in "after" but not in "before", so the two jobs are not functionally the same and cannot be compared:

Reduce input groups=1
Reduce shuffle bytes=3271558374
Reduce input records=272629852
Reduce output records=1

15astro commented 8 years ago

I haven't made any changes to the code, but I see the following log line in both runs:

before: 16/03/06 21:28:24 INFO mapred.LocalJobRunner: reduce task executor complete
after: 16/03/06 22:10:03 INFO mapred.LocalJobRunner: reduce task executor complete.

Also, the "before" job (RowCount.java) has the NoKeyRecordCountReducer reducer, so it looks no different from the job using CombineTextInputFormat in the later case.

Thanks for the help!

dgadiraju commented 8 years ago

I am not sure. As mentioned earlier, running in local mode gives inconsistent results. At the very least you should use a VM and set up a full cluster on a single node; best is to use Cloudera or Hortonworks.

15astro commented 8 years ago

The issue is with the execution time, since the second run took 3 times as long as the first. Also, there were plenty of resources available on the local machine while running the job. There is also a chance the time-difference ratio would stay the same on a cluster, since the "before" job (100 mappers) would possibly take less than 10 minutes if executed on a cluster.

dgadiraju commented 8 years ago

I am not able to figure it out from the information provided. The counters do not show the picture you are describing: per the counter information, "before" is clearly not using reducers.

The log lines below do not mean that there was a reduce task. If you go through the progress, the "before" reduce progress jumped to 100% directly, whereas in "after" it reached 100% incrementally.

before: 16/03/06 21:28:24 INFO mapred.LocalJobRunner: reduce task executor complete
after: 16/03/06 22:10:03 INFO mapred.LocalJobRunner: reduce task executor complete.

I am not 100% sure about pseudo-distributed mode, and I always prefer VMs with a full cluster on them, so I cannot give a proper explanation.

I am 100% sure that your comparison is not right: "before" runs as map-only and "after" runs with reducers. You have to troubleshoot from that angle and see why "before" is not using reducers while "after" is.

15astro commented 8 years ago

Hi dgadiraju, I tried rerunning both jobs, the before and the after. Now I see the reduce phase progressing incrementally in both. I have attached the output of the runs, so this is now a valid comparison.

However, the difference in execution time is still present (41 minutes vs. 31 minutes). Any thoughts on this? Thanks!

small_files_before_git.txt
small_files_after_git.txt