NREL / nfp

Keras layers for end-to-end learning with rdkit and pymatgen

Improved Support for Distributed Training #4

Closed. WardLT closed this 5 years ago.

WardLT commented 5 years ago

This PR adds support for ensuring that each replica sees different training data during data-parallel training. This is accomplished by giving every GraphSequence the same random seed and then shifting each replica's slice of the training data after shuffling, so all replicas shuffle identically but draw disjoint portions of the dataset.
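A minimal sketch of the seed-and-shift idea, not the PR's actual GraphSequence implementation: every replica shuffles the index array with the same seed, then offsets its batch selection by its rank so no two replicas receive the same batch. The names `ShardedSequence`, `rank`, and `world_size` are illustrative assumptions, not nfp API.

```python
import numpy as np
from tensorflow.keras.utils import Sequence


class ShardedSequence(Sequence):
    """Illustrative sequence that shards shuffled data across replicas."""

    def __init__(self, inputs, targets, batch_size, rank, world_size, seed=0):
        self.inputs = np.asarray(inputs)
        self.targets = np.asarray(targets)
        self.batch_size = batch_size
        self.rank = rank                # which replica this sequence feeds
        self.world_size = world_size    # total number of replicas
        self.rng = np.random.RandomState(seed)  # same seed on every replica
        self.indices = np.arange(len(self.inputs))

    def __len__(self):
        # Each replica covers only its share of the batches.
        return int(np.ceil(len(self.inputs) / self.batch_size / self.world_size))

    def __getitem__(self, i):
        # Shift by the replica rank so replicas draw disjoint batches
        # from the identically shuffled index array.
        batch = i * self.world_size + self.rank
        idx = self.indices[batch * self.batch_size:(batch + 1) * self.batch_size]
        return self.inputs[idx], self.targets[idx]

    def on_epoch_end(self):
        # Identical seeds give identical shuffles on every replica,
        # so the rank-based shift keeps the shards disjoint each epoch.
        self.rng.shuffle(self.indices)
```

With this pattern, each replica constructs its own sequence with its own `rank` but a shared `seed`, which is the behavior the PR describes for GraphSequence.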

WardLT commented 5 years ago

Note that this PR uses features from #3.

pstjohn commented 5 years ago

These are fantastic, thanks for the contribution! Should I merge this one and close #3?

WardLT commented 5 years ago

No problem. Thanks for making this open source! 😄

Sure, merging this and closing #3 would work for me.

pstjohn commented 5 years ago

Interesting, looks like GitHub was smart enough to merge both :)