allenai / deep_qa

A deep NLP library, based on Keras / tf, focused on question answering (but useful for other NLP too)
Apache License 2.0

Documentation for doing model parallelism on multiple GPUs #321

Open · matt-gardner opened 7 years ago

matt-gardner commented 7 years ago

Now that we've dropped Theano support, it should be easy to make our models use multiple GPUs beyond just batch parallelism, and to put some parts of the model on the CPU (e.g., the embedding layer, as recommended by Matt Peters). I think this is pretty straightforward, but I haven't done it before. We should:

  1. Write some documentation with recommendations for how and when to use this (thinking of people new to the codebase and to deep learning in general; can we give them some guidance on how to structure a model for optimal efficiency?).
  2. Implement some reasonable defaults in TextTrainer, like putting the embedding layer on the CPU (see the sketch after this list).
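
For context, here is a minimal sketch of the kind of default point 2 describes, written against plain Keras with the TensorFlow backend rather than deep_qa's actual TextTrainer API; the layer choices and sizes are made up. Wrapping a layer call in a `tf.device` scope pins its ops to that device, so the large, sparsely-updated embedding lookup stays on the CPU while the rest of the model runs on the GPU:

```python
import tensorflow as tf
from keras.layers import Dense, Embedding, Input, LSTM
from keras.models import Model

# Hypothetical sizes, for illustration only.
vocab_size, embedding_dim, sequence_length = 50000, 300, 100

word_ids = Input(shape=(sequence_length,), dtype='int32')

# Pin the embedding table and its lookup to the CPU.
with tf.device('/cpu:0'):
    embedded = Embedding(vocab_size, embedding_dim)(word_ids)

# Everything else is placed by TensorFlow's defaults, i.e. on the
# GPU when one is available.
encoded = LSTM(128)(embedded)
predictions = Dense(2, activation='softmax')(encoded)

model = Model(inputs=word_ids, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```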
matt-gardner commented 7 years ago

#326 does point 2 above, but not point 1 yet.

matt-gardner commented 7 years ago

With the batch parallelism PR merged, I'm renaming this issue to focus on the one remaining item: I believe models can already use model parallelism if you want it, by wrapping parts of the model in device scopes (sketched below). Making sure this actually works and documenting it would be nice, but it's not high priority.
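To illustrate, here is a hedged sketch of what device-scope model parallelism might look like with the TensorFlow backend; nothing here is verified against this codebase, and the inputs and layers are invented stand-ins:

```python
import tensorflow as tf
from keras.layers import Dense, Input, concatenate
from keras.models import Model

# Two hypothetical inputs, e.g. already-encoded question and passage vectors.
question = Input(shape=(300,))
passage = Input(shape=(300,))

# Encode each input on its own GPU; TensorFlow inserts the
# cross-device transfers automatically.
with tf.device('/gpu:0'):
    encoded_question = Dense(128, activation='relu')(question)
with tf.device('/gpu:1'):
    encoded_passage = Dense(128, activation='relu')(passage)

# Combine on a single device for the final prediction.
with tf.device('/gpu:0'):
    merged = concatenate([encoded_question, encoded_passage])
    predictions = Dense(2, activation='softmax')(merged)

model = Model(inputs=[question, passage], outputs=predictions)
```

One way to "make sure this works" would be to run the model in a session created with `tf.ConfigProto(log_device_placement=True)` and check the logged placements.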

DeNeutoy commented 7 years ago

I think the more important aspect of parallelism still left is getting it working with the various types of data generators/padding we have, rather than model parallelism, but yeah, in general it would be nice to double-check that this works as smoothly as it should.

matt-gardner commented 7 years ago

Agreed, hence the P2.