PlanTL-GOB-ES / corpus-cleaner

Generic toolkit for corpus cleaning
MIT License

Tackle distributed computing limitation with 50<x<600 nodes #82

Open asier-gutierrez opened 3 years ago

asier-gutierrez commented 3 years ago

Memory and/or IO overflows when computing with large numbers of nodes.

We should discuss the safest and most intelligent way of fixing this issue.

Examples of fixes:

However, the enhancement should take into account a distribution strategy that depends on the size of the files. For instance, distributing computation across 600 nodes (1 master with 599 slaves) is not the same for 1mb files as for 100mb files: the 1m599s strategy may work in the 1mb case and fail in the other. We have to find a strategy that distributes the work robustly and efficiently in all possible scenarios.
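
To make the size dependence concrete, here is a minimal sketch of how a scheduler could cap the slave count from the file sizes. Everything in it is an assumption for illustration: `plan_distribution` is a hypothetical helper, and the 32 GB master RAM budget is an invented number, not a measured one.

```python
import os

# Hypothetical sketch (not the toolkit's actual scheduler): cap the number
# of slaves so that, with one in-flight file per slave, the master never
# buffers more than an assumed RAM budget. Names and numbers are
# illustrative only.
MASTER_RAM_BUDGET = 32 * 1024 ** 3  # assume ~32 GB usable on the master

def plan_distribution(paths, max_slaves=599):
    sizes = [os.path.getsize(p) for p in paths]
    largest = max(sizes)
    # Worst case: every slave has one copy of the largest file sitting
    # in the master's buffers at the same time.
    ram_cap = max(1, MASTER_RAM_BUDGET // largest)
    return min(max_slaves, len(paths), ram_cap)
```

Under these invented numbers, 1mb files allow the full 599 slaves, while 100mb files cap the count at roughly 330, which is exactly the kind of asymmetry described above.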

asier-gutierrez commented 3 years ago

It is a problem of both IO and the master's RAM. Depending on the data, one problem will show up sooner than the other. In the case of BNE, when scaling nodes, IO becomes a problem first and RAM second.

Proposal of fixes:

  1. Change the architecture: M masters, S slaves, and D data nodes (the nodes that load the data and distribute it over the InfiniBand network).
  2. There is a 200 GB local SSD available as temporary storage during jobs ($TMPDIR=/scratch/tmp/[jobid]) according to the documentation. We could make the master node load data onto the 200 GB SSD and actively add and remove information; see the staging sketch after this list. This is a little bit messy.
  3. Equally distribute to each node's local storage (tmpfs) the data it has to operate on; see the sharding sketch after the inconveniences below.
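
For option 2, a minimal staging loop might look like the sketch below. It assumes the job-local SSD is reachable through $TMPDIR as the documentation says; `process_file` stands in for the actual cleaning step, and the 180 GB budget is just an arbitrary safety margin under the 200 GB limit.

```python
import os
import shutil

# Hypothetical staging loop for option 2: copy each input file to the
# job-local SSD ($TMPDIR), process it there, and evict already-processed
# files whenever the next copy would exceed the budget.
TMPDIR = os.environ.get("TMPDIR", "/tmp")
SSD_BUDGET = 180 * 1024 ** 3  # headroom below the 200 GB SSD limit

def stage_and_process(paths, process_file):
    used = 0
    staged = []  # (local_path, size) of files already processed
    for path in paths:
        size = os.path.getsize(path)
        # Evict the oldest staged files until the new one fits.
        while staged and used + size > SSD_BUDGET:
            old_path, old_size = staged.pop(0)
            os.remove(old_path)
            used -= old_size
        local = os.path.join(TMPDIR, os.path.basename(path))
        shutil.copy(path, local)
        used += size
        process_file(local)
        staged.append((local, size))
```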

And the inconveniences:

  1. BSC-specific fix that will limit the number of concurrent nodes.
  2. Less BSC-specific (but still specific) fix that is likely to be slow, since tmpfs here is HD-backed and not SSD.
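
For option 3, the "equally distribute" step could be a simple size-balanced sharding of the file list, one shard per node's local storage. `shard_by_size` is a hypothetical helper written for this comment, not an existing function in the repo.

```python
import os

# Hypothetical sketch for option 3: split the file list into n_nodes
# shards of roughly equal total size, so each node only ever reads the
# shard staged into its own local storage (tmpfs).
def shard_by_size(paths, n_nodes):
    # Greedy balancing: hand the next-largest file to the currently
    # lightest node.
    shards = [[] for _ in range(n_nodes)]
    loads = [0] * n_nodes
    for path in sorted(paths, key=os.path.getsize, reverse=True):
        i = loads.index(min(loads))
        shards[i].append(path)
        loads[i] += os.path.getsize(path)
    return shards
```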